CES 2026 closed on January 9, 2026. Over three days in Las Vegas, more than 4,100 exhibitors showed what they’ve been building, from large public companies like Google, Amazon, Samsung, and Sony to promising early-stage startups.
At BasicAI, we follow this event closely. We were glad to see several of our customers showcasing their latest work, and we congratulate all the teams recognized in this year's CES Innovation Awards.
In this post, we want to share 15 computer vision AI use cases from the 2026 award winners and honorees. These cases reflect the current state of commercial AI and hint at what's coming next.
CES 2026 and the CES Innovation Awards
CES (Consumer Electronics Show) is one of the largest and most influential annual technology events in the world. Organized by the Consumer Technology Association (CTA), it takes place every January in Las Vegas.

Since its launch in 1967, CES has evolved from a showcase for televisions and radios into a global benchmark for emerging technologies spanning artificial intelligence, automotive, digital health, and beyond.
The CES Innovation Awards are CTA’s annual program recognizing outstanding design and engineering in consumer technology. The award carries significant weight in the industry and is widely regarded as a stamp of approval for the year's best products.
2026 is widely seen as a breakout year for Physical AI, and CES reinforced that view. Many award-winning products run on local AI chips for real-time inference. Computer vision applications were especially prominent.
RAPA: Autonomous driving perception with multiple 4D imaging radars
2026 Best of Innovation in Artificial Intelligence By Deep Fusion AI

4D imaging radar is gaining traction as a promising sensor option in autonomous driving perception stacks. It can approach LiDAR-like spatial resolution at a much lower cost, while naturally providing velocity and working in all weather. But radar point clouds are sparse, and multipath interference is a persistent limiter.
RAPA takes a software-defined approach to fusing multiple 4D radars. It learns radar-signal physics for adaptive filtering, then uses an attention-based deep learning model trained on its own dataset to deliver real-time, high-precision detection and tracking. It is designed to run efficiently on edge embedded platforms.
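RAPA's architecture has not been published in detail, but the general pattern described here (multiple radar point clouds transformed into one vehicle frame, then scored by an attention-based network) can be sketched in a few lines of PyTorch. Everything below, from dimensions to feature layout, is an illustrative assumption rather than Deep Fusion AI's implementation.

```python
# Minimal sketch, not RAPA's actual model: fuse point clouds from several
# 4D radars into a common vehicle frame, then let points attend to each
# other before per-point objectness scoring. All names and sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class RadarFusionDetector(nn.Module):
    def __init__(self, point_dim=5, embed_dim=64, heads=4):
        super().__init__()
        # Each point: x, y, z, radial velocity (Doppler), RCS
        self.embed = nn.Sequential(nn.Linear(point_dim, embed_dim), nn.ReLU(),
                                   nn.Linear(embed_dim, embed_dim))
        self.attn = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
        self.head = nn.Linear(embed_dim, 1)

    def forward(self, clouds, extrinsics):
        # clouds: list of (N_i, 5) tensors, one per radar
        # extrinsics: list of (4, 4) radar-to-vehicle transforms
        fused = []
        for pts, T in zip(clouds, extrinsics):
            ones = torch.ones(len(pts), 1, dtype=pts.dtype)
            xyz = (torch.cat([pts[:, :3], ones], dim=1) @ T.T)[:, :3]
            fused.append(torch.cat([xyz, pts[:, 3:]], dim=1))
        x = self.embed(torch.cat(fused, dim=0)).unsqueeze(0)   # (1, N, D)
        x, _ = self.attn(x, x, x)        # points attend across all radars
        return torch.sigmoid(self.head(x)).squeeze(-1)         # per-point score
```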
This work supports the engineering feasibility of radar-only perception. If 4D radar could carry primary perception on its own, it would significantly reduce the BOM cost of autonomous driving systems and open the door to scaled deployment in cost-sensitive and harsh-environment verticals such as unmanned surface vessels and robotics.
VIXallcam: All-weather vision enhancement for commercial vehicles
2026 Honoree in Smart Communities By IntelliVIX Co.,Ltd

In commercial vehicle operations, degraded visibility in severe weather is a major contributor to crashes.
VIXallcam is an AI vision camera built for long-haul trucks, mountain routes, and special-purpose vehicles. It keeps delivering a clear view in dense fog, heavy rain, blizzards, tunnels, and even complete darkness where standard cameras fail.
The system detects pedestrians, vehicles, and road obstacles up to 200 meters ahead, buying reaction time. It adapts automatically to changing weather without manual tuning.
Logistics accidents are expensive. A few seconds of earlier warning can translate into measurable reductions in incident rates. VIXallcam fills an ADAS gap in edge conditions, helping fleets keep night and bad-weather schedules without trading away safety.
Argus-D: Multimodal disaster warning on smart cameras
2026 Honoree in Artificial Intelligence, Products in Support of Human Security for All By IIST Co., Ltd

As extreme climate events become more frequent, traditional security cameras that focus on post-incident evidence are no longer enough. Sending every video stream to the cloud for inference is also costly and fails under outages and congestion.
Argus-D embeds Physical AI and multimodal sensing into a standard surveillance-camera form factor. It detects wildfires, building collapses, and earthquakes in real time, with seamless integration into smart IoT infrastructure.
For fire detection, it reports accuracy above 99%. In earthquake scenarios, it uses P- and S-wave information to estimate direction and distance to the source, then triggers faster response loops through IoT coordination.
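The company does not disclose how it localizes quakes, but the physics it leans on is standard: S waves travel more slowly than P waves, so the arrival lag grows with distance to the source. A minimal sketch of that textbook relation, with typical crustal velocities assumed:

```python
# Textbook estimate of distance to a seismic source from the P/S arrival
# lag; not necessarily Argus-D's exact method. Velocities are typical
# crustal values and are assumptions.
def epicentral_distance_km(sp_lag_s, vp_kms=6.0, vs_kms=3.5):
    """distance = lag * Vp*Vs / (Vp - Vs), i.e. about 8.4 km per second of lag."""
    return sp_lag_s * (vp_kms * vs_kms) / (vp_kms - vs_kms)

print(epicentral_distance_km(5.0))  # a 5 s lag puts the source roughly 42 km away
```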
Argus-D is a concrete example of edge intelligence landing in public safety. By pushing multimodal perception and inference into the camera, it can fire alerts on millisecond timescales, creating a critical window for evacuation and emergency response.
Real-time drunk driving detection from driver behavior
2026 Honoree in Vehicle Tech & Advanced Mobility By Smart Eye AB

Drunk driving is not always visible to external sensors, and traditional tests (breath, blood) don’t fit into everyday driving workflows.
Smart Eye added alcohol impairment detection to its commercial driver monitoring system. It is positioned as the first mass-production solution in the industry that infers alcohol impairment from real-time behavior analysis rather than physiological sampling.
The system analyzes subtle patterns in eye movement and eyelid dynamics to estimate whether the driver is under the influence. The timing also matches a regulatory shift, with programs such as Euro NCAP bringing impairment detection into evaluation criteria.
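Smart Eye's features are proprietary, but the kind of temporal eyelid signals such a system might compute can be illustrated from a per-frame eyelid-openness trace. The thresholds and feature names below are assumptions for illustration only:

```python
# Illustrative only, not Smart Eye's feature set: derive PERCLOS-style
# statistics from an eyelid-openness trace (0 = closed, 1 = fully open).
import numpy as np

def eyelid_features(openness, fps=60, closed_thresh=0.2):
    closed = np.asarray(openness) < closed_thresh
    edges = np.diff(closed.astype(int))
    starts = np.where(edges == 1)[0]          # open -> closed transitions
    ends = np.where(edges == -1)[0]           # closed -> open transitions
    if len(ends) and len(starts) and ends[0] < starts[0]:
        ends = ends[1:]                       # trace began mid-blink
    n = min(len(starts), len(ends))
    durations = (ends[:n] - starts[:n]) / fps
    return {"perclos": float(closed.mean()),  # fraction of time eyes closed
            "blink_rate_per_min": n * 60 * fps / len(closed),
            "mean_blink_s": float(durations.mean()) if n else 0.0}
```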
Drunk driving causes more than 12,000 deaths per year in the US. Embedding detection into passive driver monitoring changes what’s practical for commercial fleets and may also shape future insurance pricing models built around non-intrusive risk signals.
AA-2: An indoor delivery robot that can ride elevators
2026 Honoree in Robotics By GoLe-Robotics

The growth of late-night delivery has created new risks for high-end apartments: driver fatigue, security exposure, elevator congestion, and privacy concerns for residents.
AA-2 is an autonomous delivery robot designed for gated residential communities. Through integration with the EV-1 elevator interface, it can call an elevator and ride it autonomously.
The robot uses flexible materials to absorb impact if it contacts residents or objects. After delivery, it can deflate for compact storage, and when it returns to the charging station it recharges the battery and re-inflates the airbag system.
Most delivery robots have focused on the outdoor “last mile.” AA-2 reflects product thinking around the “last 100 meters.” Indoor vertical and horizontal mobility, integration with building systems, and safe operation in private spaces force a different set of engineering trade-offs. Vision and perception here bias toward close-range obstacle avoidance, semantic scene understanding, and safe human-robot coexistence.
AEON: A collaborative humanoid robot for industrial sites
2026 Honoree in Robotics By Hexagon

Aging workforces and structural labor shortages are pushing industry to revisit the practical value of humanoid robots.
AEON is designed to work alongside human workers, not replace them. Wheeled mobility improves efficiency and runtime, while multi-sensor fusion and spatial intelligence handle navigation and manipulation. Its dexterous hands and skills cover machine tending, inspection, asset capture, and digital modeling, with support for teleoperation and assisted decision-making.
Notably, the robot's appearance and behavior have been refined with psychological research to improve human acceptance.
Humanoids have drawn heavy capital in recent years, but many products remain in demo stage. AEON signals a shift from lab proof points toward industrial deployment. The “collaborator” positioning also avoids the social backlash implied by replacing workers, and that pragmatic product framing is worth studying.
Bedivere: Autonomous navigation robot for the visually impaired
2026 Honoree in Artificial Intelligence, Products in Support of Human Security for All By AidALL Inc.

Millions of people with visual impairment face ongoing challenges in independent mobility. For this population, the core need is not positioning but continuous, safe walking.
Bedivere is a portable autonomous navigation robot that runs environmental perception and path planning on-device. Local AI interprets obstacles and free space in real time, produces an actionable safe route, and works fully offline, avoiding GPS drift, signal loss, and privacy constraints.
Guide dog training takes up to two years and costs are high. Global supply falls far short of demand. Bedivere aims to approach guide-dog utility without training or long-term care. It is lightweight, quick to learn, and designed for complex indoor and outdoor environments.
Instead of chasing a general-purpose humanoid narrative, Bedivere focuses on a sharply defined user need. This makes productization and scale more achievable and makes it a meaningful embodied AI attempt in accessibility.
SafeZone: Vision AI for bus door safety
2026 Honoree in Vehicle Tech & Advanced Mobility By oToBrite Electronics, Inc.

Bus door pinch injuries look rare at the individual level, but globally they drive real harm and litigation. Traditional pneumatic anti-pinch systems have blind spots and struggle with soft objects like clothing, backpacks, or limbs.
SafeZone uses a single-camera module plus an ECU. A deep learning model monitors the door area in real time and prevents closure while passengers are boarding or exiting. It covers a 30×200 cm detection region. The tight integration also makes it applicable to cranes, forklifts, garbage trucks, and other industrial equipment with pinch hazards.
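oToBrite has not published its decision logic, but the core gating idea is simple to sketch: once the camera is calibrated so the 30×200 cm region maps to a known image area, the door is held whenever any detection overlaps that zone. The pixel coordinates below are placeholders:

```python
# Minimal sketch of zone gating, not oToBrite's implementation. `zone` is
# the calibrated pixel box covering the 30 x 200 cm region; coordinates
# here are placeholder values.
def door_may_close(detections, zone=(400, 80, 520, 680)):
    """detections: list of (x1, y1, x2, y2) boxes from the door camera."""
    zx1, zy1, zx2, zy2 = zone
    for x1, y1, x2, y2 in detections:
        if x1 < zx2 and x2 > zx1 and y1 < zy2 and y2 > zy1:
            return False      # something overlaps the zone: keep the door open
    return True
```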
SafeZone pushes computer vision into an overlooked safety niche. A cost-controlled, retrofit-friendly vision module can reduce incident rates and provide a general safety layer for vehicle intelligence upgrades, aligned with broader smart transportation infrastructure trends.
Multi-tron: AI waste sorting at collection points
2026 Honoree in Smart Communities By Aetech

Conventional waste sorting facilities are hard to place inside cities due to noise, footprint, and infrastructure demands. Waste often travels long distances to centralized plants, adding carbon emissions and increasing pollution risk.
Multi-tron changes the layout by integrating pre-processing and AI sorting into a compact, modular unit that can be deployed in mixed-use buildings, public facilities, or even outdoor event sites.
The equipment line is 30% shorter than conventional solutions. Low noise and clean industrial design reduce the visual and psychological friction of installing such systems in urban environments, enabling earlier separation at the source for higher-purity recyclables.
By moving sorting to the point of waste generation, Multi-tron changes the economics of the recycling value chain. The distributed-infrastructure model is replicable, especially for communities and developing regions without strong centralized facilities.
Family Hub: An AI vision refrigerator
2026 Honoree in Smart Home By Samsung Electronics America

Food recognition in a fridge fails on the long tail. Fresh ingredients, packaging variance, occlusions, changing lighting, and user placement habits break closed-category models quickly.
Samsung’s four-door refrigerator is the first of its kind to integrate a multimodal large language model. Its built-in AI Vision system can recognize an unlimited range of ingredients and keep a live food list, covering fresh items, packaged goods, and prepared foods. It then turns recognition into recipes, shopping lists, and energy management through recommendations and dialogue, using on-device AI and smart home integration to complete the household interaction loop.
This product marks the point where LLMs and vision AI move into major appliances. “Unlimited items” recognition depends on vision-language generalization. From a consumer electronics view, Samsung is redefining the refrigerator from passive storage into an active household management surface, with downstream implications for food retail and health data services.
Selto: A general automation agent built on visual UI understanding
2026 Honoree in Artificial Intelligence By INFOFLA

Many government websites, legacy systems, and enterprise intranets lack stable APIs. Traditional RPA relies on scripts and fixed coordinates, and it breaks when the UI changes.
Selto treats the UI as an environment. A vision-language model perceives interface elements and performs clicks, typing, and decisions like a human, enabling end-to-end task execution. It also self-learns from task logs to reduce configuration cost, and it supports both cloud and on-prem deployments for security and compliance.
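INFOFLA has not published Selto's internals, but the perceive-decide-act loop behind this class of UI agent is easy to outline. In the sketch below, plan_next_action is a hypothetical placeholder for the vision-language model call; only the pyautogui calls are real:

```python
# A minimal sketch of a screenshot-driven UI agent loop; not INFOFLA's code.
import pyautogui

def plan_next_action(goal, screenshot):
    """Hypothetical placeholder for a vision-language model that maps
    (goal, screenshot) to an action, e.g. {"type": "click", "x": 120, "y": 340}."""
    return {"type": "done"}

def run_task(goal, max_steps=20):
    for _ in range(max_steps):
        screenshot = pyautogui.screenshot()           # perceive the UI as pixels
        action = plan_next_action(goal, screenshot)   # decide
        if action["type"] == "click":
            pyautogui.click(action["x"], action["y"])
        elif action["type"] == "type":
            pyautogui.typewrite(action["text"])
        elif action["type"] == "done":
            return True
    return False                                      # give up after max_steps
```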
It shifts computer vision targets from the physical world to pixel-based digital interfaces, and it makes interaction the output. This is a high-value CV domain with low sensor noise but high distribution shift. It will push work on UI understanding, temporal planning, controllability, and failure recovery.
TlatFarm: Drone-driven autonomous smart agriculture
2026 Honoree in Construction & Industrial Tech By Turbine Crew Inc.

Precision agriculture has proven potential, but in remote farms with weak infrastructure it is often constrained by power availability, connectivity, and operational complexity.
TlatFarm combines autonomous drone flight, wireless charging, multispectral crop monitoring, and edge AI analysis into a turnkey system. Drones run multiple missions per day, capturing RGB, NDVI, infrared, and multispectral imagery, while also performing spraying tasks. AI models process imagery and in-field sensor data in real time to predict pests, nutrient issues, and optimal irrigation and harvest timing, with accuracy up to 92%.
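Of the imagery types listed, NDVI is the one with a fixed public definition: a vegetation index computed from the red and near-infrared bands, where values near 1 indicate dense, healthy vegetation. The generic computation below is not specific to TlatFarm's pipeline:

```python
# Standard NDVI computation from multispectral bands; not TlatFarm-specific.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    nir, red = nir.astype(np.float32), red.astype(np.float32)
    return (nir - red) / (nir + red + eps)    # range roughly [-1, 1]
```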
Agriculture is a domain where spatiotemporal data density sets the ceiling. Low-cost, high-frequency, consistent multimodal observation determines whether CV models can move from recognition to prediction. Closing the loop across capture, power, and analysis also helps AI move beyond demo plots into regions with limited infrastructure.
SHOSABI: 3D motion sensing with learning feedback
2026 Honoree in Sports & Fitness By SHOSABI inc

Most fitness and rehab devices focus on strength or surface physiological metrics, while ignoring brain-body coordination, which sits closer to the foundation of human performance.
SHOSABI is a training tool built on a patented 3D motion sensing technology that captures over one million 3D data points per second. It objectively evaluates coordination, stability, and left-right balance, then generates personalized training plans. Its adaptive feedback engine switches among voice, visual guidance, and immersive 3D rotational instruction based on user state, optimizing the cognitive-motor learning process.
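SHOSABI's scoring is proprietary, but left-right balance is commonly quantified in biomechanics with a symmetry index over paired per-side metrics, such as peak joint speed. A minimal sketch of that generic measure:

```python
# A common biomechanics symmetry index, not SHOSABI's proprietary scoring.
def symmetry_index(left, right):
    """0 means perfectly symmetric; larger values mean greater asymmetry (%)."""
    return abs(left - right) / (0.5 * (left + right)) * 100.0

print(symmetry_index(1.8, 2.0))   # about 10.5% asymmetry between sides
```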
Behind SHOSABI is more than a decade of research from the University of Tokyo and Mitsubishi Chemical Group, plus over ten licensed patents. It effectively defines a new motion science subcategory.
Competitive sports and aging health are both recognizing the value of coordination training, but objective measurement has been limited. SHOSABI packages high-precision motion sensing and AI feedback in a consumer product form. It can serve professional athletes optimizing performance and ordinary people preventing musculoskeletal decline. Its data assets may also catalyze a new generation of AI applications based on movement intelligence.
TORAH VISION AI: 16-bit high-resolution chest X-ray decision support
2026 Honoree in Artificial Intelligence By Torah Co., ltd

Chest X-ray remains one of the most common imaging exams, but early lesions can be subtle, low-contrast, and highly reader-dependent.
TORAH VISION AI uses high-definition Torah data and a ResNet50-based deep learning model to analyze chest X-rays automatically. It can identify 14 common thoracic pathologies as defined by the ChestX-ray14 dataset, including cardiomegaly.
The system outputs AI findings with corresponding clinical recommendations, integrates with the Medidata platform for a precision diagnostic workflow, and uses Biovia solutions to improve training data quality.
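Torah's production model is not public beyond the ResNet50 backbone and the 14 ChestX-ray14 findings, but a minimal multi-label setup along those lines looks like the sketch below. Adapting the first convolution to single-channel input and the 16-bit normalization are assumptions:

```python
# Minimal sketch of a ResNet50 multi-label classifier for the 14
# ChestX-ray14 findings; not Torah's production model. The single-channel
# 16-bit input handling is an assumption.
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 14)       # one logit per pathology

def predict(image_u16):
    # image_u16: (H, W) numpy uint16 array from a 16-bit radiograph
    x = torch.from_numpy((image_u16 / 65535.0).astype("float32"))[None, None]
    with torch.no_grad():
        return torch.sigmoid(model(x))                # independent probabilities
```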
The core proposition here is treating resolution, dynamic range, annotation quality, and deployment platform as equally important system components. The 16-bit pipeline preserves more grayscale detail, helping models capture subtle pathological signs. This is a technical direction worth watching in medical imaging AI.
Strutt ev¹: A personal mobility device with environmental perception
2026 Best of Innovation in Vehicle Tech & Advanced Mobility, 2026 Honoree in Accessibility & Longevity By Strutt Pte. Ltd.

In mixed indoor-outdoor use, risk for personal mobility devices comes from more than speed. Tight spaces, obstacles, and social interaction with crowds matter just as much.
Strutt ev¹ brings LiDAR and AI algorithms, derived from autonomous driving, into a personal mobility device. Its Co-Pilot system continuously senses environmental complexity and adjusts trajectory in real time to smooth bumps and avoid collisions with walls, furniture, or pedestrians. Natural-language voice interaction reduces the need for menu navigation.
Strutt ev¹ pushes perception and control into a category that has long had little onboard intelligence, offering safer mobility options for people with limited mobility and the elderly. At an industry level, it is also a clear example of autonomy tech spillover. As LiDAR and perception costs keep falling, they can enter price-sensitive consumer products and expand the small mobile robot market.
What these 15 cases suggest
AI's center of gravity is shifting from general-purpose foundation models flexing their capabilities toward vertical, specialized applications.
Across these award-winning cases, edge intelligence and offline availability are becoming standard for many scenarios, especially in industrial and safety systems. The definition of perception is also expanding from vision alone to various forms of multi-modal fusion for more complete world understanding.
AI is moving into fragmented scenarios that traditional automation struggled to cover because the environment is non-standard and the logic is messy, such as elevator-integrated delivery robots, source point waste sorting, and system UI operation.
With these trends, training data is evolving from a pursuit of volume toward high-value long-tail data and synthetic or physics-based data. The most immediate impact is data labeling for extreme and rare conditions. In cases like VIXallcam, general datasets offer limited value. The competitive edge lies with teams that possess scarce data like "trucks in a blizzard."
Annotations are also moving from semantic labels to behavioral labels. A bounding box around a car is not enough for Smart Eye’s impairment detection or SHOSABI’s coordination assessment. The label needs to capture fine-grained temporal signals such as micro-patterns in eye dynamics or biomechanical features in 3D space.
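The difference is easy to see side by side. The two records below are invented for illustration (not any vendor's schema): the first is a conventional spatial label, the second carries the temporal, expert-verified structure a behavioral label needs.

```python
# Invented schemas for illustration, not any vendor's format.
spatial_label = {"frame": 1042, "class": "car", "bbox": [412, 233, 655, 398]}

behavioral_label = {
    "track_id": "driver_left_eye",
    "event": "slow_eyelid_closure",
    "start_ms": 81_250, "end_ms": 81_930,             # sub-second timing matters
    "features": {"closure_velocity": 0.31, "reopen_delay_ms": 270},
    "verified_by": "expert_reviewer",
}
```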
That kind of data annotation often requires domain experts in medicine or physics, and it does not scale through low-cost crowdsourcing. Expert knowledge density becomes part of the dataset’s core value.
In the next phase of AI competition, the advantage is less about parameter count and more about who can collect, validate, and maintain high-quality data that encodes physical logic, covers extreme edge cases, and is confirmed by experts, while keeping cost under control.
If your team is preparing data for a specialized AI vision application, let's talk about training data solutions. We provide expert-in-the-loop data annotation services and smart data labeling tools for many leading AI teams.
