

ADAS Clearly Explained: Functions, Algorithms, Sensors, and Data

Comprehensive guide to ADAS: sensors, perception algorithms and data annotation. Covers 11 key functions from FCW to APA with implementation tips.

11 min read

By Admon W.

On April 30, 2021, SAE International updated its taxonomy for driving automation systems. As in the 2018 version, SAE keeps 6 levels from no driving automation (L0) to full automation (L5), describing how much a vehicle can replace human drivers in perception, decision-making, and control.

L1 and L2 are now called "Driver Support Systems," while L3 through L5 remain "Automated Driving Systems," in which drivers do not need to operate the vehicle while the system is engaged.


Levels of Driving Automation. SAE International.

The former category generally maps to ADAS (Advanced Driver Assistance Systems) and is now standard in passenger cars, with especially high penetration in mid-to-high trims and EVs. Its operational scenarios keep expanding and its robustness keeps improving.

In this post, we'll explore core ADAS concepts, along with the algorithms and data that power them. If you want a clearer view of ADAS functions or plan to build relevant algorithms, this guide should provide valuable insights.

What is ADAS (Advanced Driver Assistance Systems)?

ADAS comprises a set of functions that work through sensors, controllers, and human-machine interfaces to reduce risk, lighten driver workload, and improve comfort in specific driving scenarios.

Typical features include forward collision warning with automatic emergency braking, adaptive cruise control, lane departure warning with lane keeping assist, blind spot monitoring, traffic sign recognition, and automated parking.

All share one trait: assist rather than replace. The system aids perception and control under certain conditions, but the driver monitors the environment and remains responsible, with hands-on readiness to take over. Implementation focuses on functional safety, redundancy, and robust human-machine interaction.

ADAS vs Autonomous Driving

Autonomous driving, corresponding to L3-L5, represents higher-level system capabilities. The goal is to have machines handle dynamic driving tasks across broader operational design domains.

From L3, the system performs longitudinal and lateral control and monitors the environment, assuming decision responsibility under defined conditions. At L4, vehicles can run driverless within their applicable domains. L5 aims for full automation across all scenarios.

Compared to ADAS, autonomous driving relies heavily on high-fidelity environment modeling, prediction and planning, edge–cloud collaboration, redundant actuation and safety architecture, and systematic handling of long-tail events. Responsibility and operations shift from “driver responsible, feature assistive” to “system responsible, scene-limited or broad takeover.”

Simply put, ADAS is a stepping stone toward autonomous driving, while the latter represents system-level capabilities and service models that replace human driving within defined operational domains.

Perception System for ADAS

Both ADAS and autonomous driving largely rely on the perception-localization-decision-control loop, with perception as the primary component.

ADAS perception systems typically combine multiple sensors to cover target detection and scene understanding needs across different distances, fields of view, and weather/lighting conditions. Sensor fusion enhances robustness and redundancy.

Common stacks include cameras, millimeter-wave radar, LiDAR, and ultrasonic sensors, plus high-precision positioning and map assistance. Different vehicles and market segments trade off sensor count and specifications.


An Example of Sensors Used for ADAS Perception

Cameras

Cameras provide the richest information, typically denoted as V (Video). They detect lane markings, traffic signs, traffic light states, pedestrians and vehicles, road edges, and drivable areas.

Production vehicles usually employ front-facing monocular or stereo cameras, combined with surround view (four to six wide-angle cameras) for low-speed panoramic perception. Some add side/rear telephoto for lane change and rear-approach awareness.

Cameras are precise and low-cost in good light, but degrade with glare, night, rain/snow/fog, and occlusion/soiling. Distance estimation accuracy is also limited, requiring complementary sensors.

LiDAR

LiDAR is increasingly common in high-end ADAS and vehicles transitioning to L3/L4. It provides precise range and 3D shape, works in low light and some adverse weather, and helps with small distant objects, odd obstacles, and curb edges.

Common setups include roof or grille-mounted mid-to-long-range LiDAR, supplemented by blind-spot LiDAR in fenders or headlight areas. Cost, form factor integration, and contamination sensitivity are production trade-offs. Some L2 systems still choose "vision + radar" solutions for cost control.

Radar

Radar (R) in sensor suites includes millimeter-wave and ultrasonic radar.

mmWave radar excels at range and velocity measurement with minimal weather impact. It suits mid-to-long-range forward object detection and relative velocity measurement. It is key for ACC, FCW, and AEB.

Typical layouts use one long-range front radar plus 2–4 mid/short-range corner radars covering the front and rear flanks, enabling lane change assist, blind spot monitoring, and rear cross traffic alert. Resolution is lower than that of cameras, with difficulty on small, non-metallic, or static objects, so fusion is often needed to improve target classification and track stability.

Ultrasonic radar handles near-field sensing, with typical ranges from tens of centimeters to several meters. It mainly supports automated parking, low-speed surround fusion, and near-field collision prevention.

Vehicles typically mount 8–12 units around the front/rear bumpers and corners. They detect low obstacles and nearby corners but cannot provide high-precision shape or classification information.

Sensor Fusion

Sensor fusion is the critical software layer in ADAS perception systems.

Low-level fusion aligns time and coordinates, associating radar tracks with visual detection boxes, forming stable target trajectories. High-level fusion further integrates semantic segmentation, free space, and dynamic target prediction, outputting unified environmental models for control and planning.
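
As a toy illustration of the object-level association step, the sketch below matches radar tracks to camera detections by distance in a shared vehicle frame using the Hungarian algorithm. The inputs, the 5 m gate, and the function names are assumptions for illustration, not a production fusion design.

```python
# A minimal sketch of object-level radar-camera association (illustrative only).
# Assumes both sensors' outputs are already time-aligned and transformed into a
# common vehicle frame; the 5 m gate is an arbitrary example value.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(radar_xy: np.ndarray, camera_xy: np.ndarray, gate_m: float = 5.0):
    """Return (radar_idx, camera_idx) pairs whose distance is within the gate."""
    # Pairwise Euclidean distances between radar tracks and camera detections
    cost = np.linalg.norm(radar_xy[:, None, :] - camera_xy[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)          # globally optimal assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate_m]

# Example: two radar tracks, two camera detections (x, y in meters, vehicle frame)
radar = np.array([[30.0, 0.5], [55.0, -3.2]])
camera = np.array([[29.2, 0.4], [80.0, 1.0]])
print(associate(radar, camera))   # -> [(0, 0)]; the second pair exceeds the gate
```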

To meet functional safety requirements, systems design multi-source redundancy and health monitoring, including cross-checking between critical channels, sensor self-diagnosis, and degradation strategies. This ensures predictable function degradation rather than sudden failure when individual sensors fail or environmental conditions deteriorate.

Different manufacturers choose paths from vision-only to multi-sensor fusion based on cost targets, functional positioning, and regulatory requirements.

11 ADAS Functions and Algorithm Paths

ADAS covers longitudinal, lateral, interaction, and driver monitoring aspects.

In this section, we'll introduce 11 ADAS-related functions, focusing on algorithm implementation approaches and key considerations.

FCW (Forward Collision Warning) and AEB (Autonomous Emergency Braking)

FCW detects forward obstacles and computes collision risk, warning drivers when danger arises.

The core lies in object detection, tracking, and TTC (Time to Collision) accuracy—not just detecting vehicles, but determining which ones pose collision risks and when to alert.

Vision struggles with small distant targets and depth. Practice often uses radar fusion to improve ranging accuracy, while pure vision needs explicit depth heads or IPM (Inverse Perspective Mapping) with lane constraints. TTC calculation must consider acceleration changes, typically using Kalman filtering to smooth trajectories and reduce false positives.
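
As a rough sketch of the TTC step, the function below solves the closing-gap equation under a constant relative-acceleration assumption and falls back to the constant-velocity formula when relative acceleration is negligible. Sign conventions and the example numbers are illustrative assumptions.

```python
import math

def time_to_collision(range_m: float, range_rate: float, range_accel: float = 0.0) -> float:
    """Smallest positive t solving range + range_rate*t + 0.5*range_accel*t^2 = 0.

    range_rate < 0 means the gap is closing. Returns math.inf if no collision
    is predicted under the constant-acceleration assumption.
    """
    if abs(range_accel) < 1e-3:                        # near-constant relative velocity
        return -range_m / range_rate if range_rate < 0 else math.inf
    a, b, c = 0.5 * range_accel, range_rate, range_m
    disc = b * b - 4 * a * c
    if disc < 0:
        return math.inf
    roots = [(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else math.inf

# Example: 40 m gap, closing at 10 m/s, lead vehicle braking adds -1 m/s^2 relative accel
print(round(time_to_collision(40.0, -10.0, -1.0), 2))  # ~3.42 s (vs 4.0 s constant-velocity)
```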

Training data must cover various weather and lighting conditions, especially nighttime taillights and rain reflections. Curves require trajectory prediction integration, while small targets like motorcycles need enhanced sample weighting during training.


Forward Collision Warning

AEB adds active braking control to FCW, automatically applying brakes when collision is imminent. Key aspects include multi-level trigger threshold design and false activation suppression.

Technical challenges involve decision-making in complex scenarios, such as when leading vehicles change lanes to reveal a stationary object, or reflective cones in work zones. Many systems use scene classifiers to adapt thresholds and prioritize evidence (e.g., critical radar TTC vs. high-risk vision prediction).
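
A skeletal example of multi-level trigger design based on TTC alone is shown below; the stage names and thresholds are illustrative assumptions, since real calibrations depend on speed band, scenario class, and evidence quality.

```python
# Illustrative multi-stage AEB escalation based on TTC; thresholds are example
# values only and would be calibrated per speed band and scenario in practice.
def aeb_stage(ttc_s: float) -> str:
    if ttc_s < 0.6:
        return "FULL_BRAKE"
    if ttc_s < 1.6:
        return "PARTIAL_BRAKE"
    if ttc_s < 2.6:
        return "WARNING"
    return "NO_ACTION"

print(aeb_stage(2.0))   # -> "WARNING"
```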

BSW (Blind Spot Warning)

BSW alerts on targets in the rear-side blind zone. It doesn't decide lane changes but provides presence and relative risk indication.

Most production systems primarily use corner radar, but vision-only is replacing radar in some designs. The core challenge is object detection in fisheye images. Technical solutions include distortion-aware data augmentation, spherical convolution, or unified BEV representation.

Object tracking algorithms must handle target transitions between cameras. When vehicles move from rear-view to side-view camera fields, ID consistency must be maintained. ReID features from multi-object tracking are particularly important here, but require attention to feature alignment across different cameras.

Training data should cover highway lane changes and urban merging scenarios, with clear annotation rules for partial occlusions. Motorcycles weaving through traffic are a common failure case. Annotation dimensions should include "blind zone sector entry/exit time" and "lane relationship" labels to help temporal stability.

LDW (Lane Departure Warning) and LKA (Lane Keeping Assist)

LDW alerts drivers during unintentional lane departures. Beyond accurate line detection, the hard part is separating intended lane changes from drift.

Deep learning methods perform well on structured roads, but construction markings, worn lines, and rain reflections require temporal information and vehicle models for robustness. Departure judgment can't rely solely on vehicle-to-lane distance. It must consider lateral velocity and yaw angle.

TLC (Time to Line Crossing) is a common metric, but its calculation requires accurate vehicle kinematics models. Practice requires distinguishing conscious lane changes (turn signal activated) from unconscious departures, integrating multiple vehicle state signals.
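
For intuition, a simplified TLC estimate under a constant lateral-velocity, small-angle assumption might look like the following; a production implementation would use a full vehicle kinematics model and lane geometry.

```python
import math

def time_to_line_crossing(lateral_offset_m: float, speed_mps: float, rel_yaw_rad: float) -> float:
    """Approximate TLC: time until the remaining lateral gap to the lane line is consumed.

    lateral_offset_m: distance from the vehicle's near side to the lane line (> 0).
    rel_yaw_rad: heading relative to the lane; > 0 means drifting toward that line.
    """
    lateral_velocity = speed_mps * math.sin(rel_yaw_rad)   # drift rate toward the line
    if lateral_velocity <= 0:
        return math.inf                                     # moving away from or parallel to the line
    return lateral_offset_m / lateral_velocity

# Example: 0.4 m to the line, 25 m/s (90 km/h), drifting 1 degree toward it
print(round(time_to_line_crossing(0.4, 25.0, math.radians(1.0)), 2))  # ~0.92 s
```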

Data must encompass various road types: highways, urban roads, and rural roads. Pay special attention to abnormal marking scenarios such as Y-junctions and merge/diverge areas. Annotations should include not just lane line positions but also types (solid, dashed, double yellow), because warning logic depends on line type.


Lane Keeping Assist

LKA adds active steering control to LDW, helping vehicles maintain center lane position. Common LKA implementation issues are not perception-related; driver interaction is key, requiring accurate recognition of the driver's active steering intention. Road curvature estimation accuracy directly impacts control effectiveness; map information fusion or extended forward vision range can help.

ACC (Adaptive Cruise Control)

ACC maintains set speed or headway while following.

Target selection logic is ACC's key component, requiring main target identification from multiple detected objects. This considers not just distance but also predicted motion trajectories. Cut-in handling is especially important, requiring timely recognition and tracking target switching. Occupancy grid or BEV methods help with multi-target scenes.
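
As a toy illustration of main-target selection, the sketch below predicts the ego path as a circular arc from speed and yaw rate, keeps tracks inside a lateral corridor, and picks the nearest one. The corridor width, track structure, and path model are simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    x: float       # longitudinal distance ahead, meters
    y: float       # lateral offset, meters (left positive)

def select_main_target(tracks, ego_speed, ego_yaw_rate, half_corridor=1.8):
    """Pick the nearest track inside the predicted ego corridor (circular-arc path model)."""
    curvature = ego_yaw_rate / max(ego_speed, 0.1)          # kappa = yaw_rate / v
    in_path = []
    for t in tracks:
        path_y = 0.5 * curvature * t.x ** 2                 # lateral offset of the predicted path at x
        if abs(t.y - path_y) < half_corridor:               # inside the corridor -> candidate
            in_path.append(t)
    return min(in_path, key=lambda t: t.x) if in_path else None

tracks = [Track(1, 45.0, 0.3), Track(2, 30.0, 3.6), Track(3, 70.0, -0.2)]
print(select_main_target(tracks, ego_speed=20.0, ego_yaw_rate=0.0))  # -> Track 1, nearest in-lane
```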

An often-overlooked point is that headway preferences vary by region and regulation. European and American markets prefer larger following distances, while Asian markets may require tighter following. Data collection should cover various traffic densities, especially frequent acceleration/deceleration in congested conditions.

TJA (Traffic Jam Assist) / HWA (Highway Assist)

TJA/HWA orchestrates ACC, LKA, AEB, and more for low-speed queues and highway cruising. Integration is far more complex than stacking features. Subsystems can't decide in isolation. They must share perception results and prediction information.

For example, when LKA detects construction zone detours, ACC must adjust its following strategy accordingly. This typically involves building unified occupancy grid maps or object-level maps. Decision layers often employ behavior trees or finite state machines to manage different function activation conditions and priorities.
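
A skeletal finite state machine for mode management might look like the sketch below; the states, events, and transitions are hypothetical and only illustrate the pattern, not any specific production design.

```python
# Skeletal finite state machine for TJA mode management (hypothetical states and guards).
TRANSITIONS = {
    ("OFF",      "driver_activates"): "STANDBY",
    ("STANDBY",  "odd_satisfied"):    "ACTIVE",     # lane lines visible, speed in range, etc.
    ("ACTIVE",   "odd_violated"):     "DEGRADED",   # e.g. markings lost -> follow-only mode
    ("ACTIVE",   "driver_overrides"): "STANDBY",
    ("DEGRADED", "odd_satisfied"):    "ACTIVE",
    ("DEGRADED", "timeout"):          "HANDOVER",   # request driver takeover
}

def step(state: str, event: str) -> str:
    """Advance the FSM; unknown (state, event) pairs keep the current state."""
    return TRANSITIONS.get((state, event), state)

state = "OFF"
for event in ["driver_activates", "odd_satisfied", "odd_violated", "timeout"]:
    state = step(state, event)
print(state)   # -> "HANDOVER"
```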

ODD (Operational Design Domain) definition is crucial. Systems must clearly identify whether current conditions permit activation: structured roads, clear lane markings, moderate traffic density, etc.

Data needs grow sharply, covering not just individual function corner cases but also inter-function interaction scenarios. Long, continuous drives with behavior phase labels (follow, queue, lane change, yield to emergency vehicles) enable supervised or weakly supervised orchestration learning.

Simulation testing plays an important role in traversing various combination scenarios, but sensor-model realism in the simulation environment is crucial, especially performance degradation in adverse weather.

TSR (Traffic Sign Recognition)

TSR recognizes road traffic signs and alerts drivers or controls vehicle speed. Main challenges include sign diversity, occlusion, fading, and different national standards.

Detection and classification typically use two stages, though end-to-end solutions like YOLO variants are popular. Key technical points include multi-scale detection (signs vary greatly in apparent size between near and far ranges), combined color-shape features, and temporal consistency verification. Applications must maintain sign state history to avoid recognition flickering.
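
A minimal debouncing sketch for the temporal-consistency step is shown below: a sign class is only reported once it dominates a majority of recent frame-level classifications. The window length and vote threshold are illustrative assumptions.

```python
from collections import Counter, deque
from typing import Optional

class SignDebouncer:
    """Report a sign class only when it dominates the last N frame-level classifications."""
    def __init__(self, window: int = 7, min_votes: int = 5):
        self.history = deque(maxlen=window)
        self.min_votes = min_votes

    def update(self, frame_class: Optional[str]) -> Optional[str]:
        self.history.append(frame_class)
        votes = Counter(c for c in self.history if c is not None)
        if not votes:
            return None
        cls, count = votes.most_common(1)[0]
        return cls if count >= self.min_votes else None

deb = SignDebouncer()
stable = None
for c in ["speed_60", "speed_60", None, "speed_60", "speed_80", "speed_60", "speed_60"]:
    stable = deb.update(c)
print(stable)   # -> "speed_60" once enough consistent votes have accumulated
```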


Traffic Sign Recognition

Training data must cover various national standards, including temporary signs and variable LED signs. Data augmentation should simulate fading, occlusion, motion blur and other degradations. High-precision maps can provide prior information but must handle outdated map data.

DMS (Driver Monitoring System)

DMS analyzes facial features and driving behavior to assess fatigue levels and trigger alerts or function restrictions. Unlike other ADAS functions, it focuses on the driver, not the road.

Vision-based solutions typically use near-infrared cameras to reduce lighting effects. Key features include eyelid closure (PERCLOS), yawning frequency, and head pose. But simple feature thresholds have limited accuracy. Modern solutions employ deep learning for temporal feature extraction. Facial keypoint detection provides more accurate head pose, while attention mechanisms automatically learn different feature importance.
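
PERCLOS is commonly defined as the fraction of time the eyes are closed beyond some threshold within a sliding window; the frame-based sketch below illustrates the idea, with window size and closure threshold as assumed example values.

```python
from collections import deque

class PerclosEstimator:
    """PERCLOS over a sliding window: fraction of frames whose eye openness is below a threshold."""
    def __init__(self, window_frames: int = 1800, closed_threshold: float = 0.2):
        # e.g. 60 s at 30 fps; openness below 0.2 treated as "closed" (illustrative values)
        self.closed_flags = deque(maxlen=window_frames)
        self.closed_threshold = closed_threshold

    def update(self, eye_openness: float) -> float:
        """eye_openness in [0, 1] from the eyelid/keypoint model; returns current PERCLOS."""
        self.closed_flags.append(eye_openness < self.closed_threshold)
        return sum(self.closed_flags) / len(self.closed_flags)

est = PerclosEstimator(window_frames=10)
for openness in [0.9, 0.8, 0.1, 0.05, 0.7, 0.1, 0.1, 0.8, 0.9, 0.1]:
    perclos = est.update(openness)
print(perclos)   # -> 0.5 (5 of the last 10 frames below the closed threshold)
```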

Data collection challenges stem from subjective fatigue state annotation. Lab-induced fatigue differs from real driving. Some research uses EEG and other physiological signals as ground truth, but this isn't practical for large-scale data collection. Additionally, handling occlusions from glasses, sunglasses, and masks requires special attention, potentially needing multi-modal fusion or dedicated robustness design.

APA (Automated Parking Assist)

APA uses surround cameras and ultrasonics to detect spaces, plan paths, and control the vehicle. Parking needs precise near-field perception and complex path planning.

Space detection benefits from multi-sensor fusion: ultrasonics for distance, surround cameras for semantics. BEV-based unified representation shows clear advantages, enabling direct spot detection and drivable area segmentation in bird's-eye view. Challenges include handling various non-standard spots: angled, irregular shapes, and obstacles like parking locks.


Automated Parking Assist

APA algorithm training data must include various spot types and surrounding environments. Weak-texture underground garage scenarios present major challenges. Annotations include not just parking lines but also precise positions of pillars, walls, and other vehicles. During real vehicle collection, camera calibration precision is critical—small extrinsic errors amplify after BEV projection.

ADAS System Data Requirements and Annotation

As features grow, data needs surge, evolving from single sensors toward multi-modal fusion. Common data types include images, point clouds, sensor fusion, and 4D-BEV.

Camera Image Data

Cameras are ADAS systems' most fundamental sensors, with different functions requiring different image data and annotations.

FCW and AEB primarily use 2D bounding boxes to annotate forward vehicles. Real projects employ multi-focal-length camera combinations, and the small targets from telephoto cameras need finer annotation, with accurate boxes even for vehicles only dozens of pixels wide.

LDW and LKA functions require precise lane line annotation, typically using polylines or cubic curve fitting. Annotations include not just pixel positions but also attributes like line type (solid, dashed) and color (white, yellow). At complex intersections, polygon annotation marks guide zones and drivable area boundaries.
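
For concreteness, a single lane-line label might look like the hypothetical record below; the field names and the cubic-polynomial convention are illustrative assumptions rather than any specific tool's schema.

```python
# Hypothetical lane-line annotation record (field names are illustrative, not a real schema).
lane_label = {
    "frame_id": "000123",
    "lane_id": 2,
    "line_type": "dashed",          # solid / dashed / double_yellow ...
    "color": "white",
    "points_px": [[412, 980], [455, 820], [501, 660], [548, 510]],        # image-space polyline
    "cubic_fit": {"c0": 1.8, "c1": 0.02, "c2": -3.1e-4, "c3": 1.2e-6},    # y = c0 + c1*x + c2*x^2 + c3*x^3
}
```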

Drowsiness detection relies on many keypoints to precisely describe eye and mouth states.

APA function image annotation is most complex, requiring semantic segmentation of parking spots, drivable areas, and obstacles. Masks must be pixel-precise, especially parking spot corner annotations that directly impact parking accuracy. Ground markings and speed bumps also need instance segmentation to distinguish different marking segments.

LiDAR Point Cloud Data

LiDAR point clouds are increasingly common in high-end ADAS systems, and their precise depth information significantly enhances perception performance. 3D annotation primarily uses 3D bounding boxes (3D cuboids) that capture an object's 7 degrees of freedom: position (x, y, z), dimensions (l, w, h), and orientation (yaw).
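
A 7-DoF cuboid label can be represented compactly, as in the sketch below, which also derives the box's bird's-eye-view footprint corners from the parameters; the class and field names are illustrative.

```python
import math
from dataclasses import dataclass

@dataclass
class Cuboid3D:
    """7-DoF 3D box label: center position, dimensions, and heading (yaw about the z axis)."""
    x: float          # center x, meters
    y: float          # center y, meters
    z: float          # center z, meters
    l: float          # length, meters
    w: float          # width, meters
    h: float          # height, meters
    yaw: float        # heading angle, radians

    def bev_corners(self):
        """Four corners of the box footprint in the x-y plane (bird's-eye view)."""
        c, s = math.cos(self.yaw), math.sin(self.yaw)
        corners = []
        for dx, dy in [(+0.5, +0.5), (+0.5, -0.5), (-0.5, -0.5), (-0.5, +0.5)]:
            lx, ly = dx * self.l, dy * self.w                 # corner in the box frame
            corners.append((self.x + c * lx - s * ly,         # rotate into the world frame
                            self.y + s * lx + c * ly))
        return corners

box = Cuboid3D(x=12.0, y=-1.5, z=0.8, l=4.6, w=1.9, h=1.6, yaw=0.1)
print(box.bev_corners()[0])   # first footprint corner (x, y)
```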

Unlike 2D bounding boxes, point cloud 3D box annotation requires adjustment from multiple viewpoints to ensure tight object contour fitting. The workload is enormous. A typical approach uses model pre-annotation followed by manual correction.

ACC functions leverage LiDAR's precise ranging capability, but small targets like motorcycles have sparse point clouds at distance, requiring temporal accumulation to assist annotation.

Static scene annotation typically uses 3D semantic segmentation, classifying each point as road, sidewalk, building, vegetation, etc.

Curbs matter for APA functions, requiring 3D polygon outlining of contours. Pole-like objects such as streetlights and traffic sign posts use cylindrical parameterized annotation, including bottom center point and height.

One useful point cloud annotation technique leverages intensity information: lane lines are more visible in intensity images, which assists annotation. Accurate ground point segmentation forms the foundation for subsequent annotation, typically using RANSAC or more complex ground models.
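
A bare-bones RANSAC plane fit for ground-point segmentation is sketched below using only NumPy; the iteration count and inlier threshold are illustrative, and production pipelines typically use more elaborate ground models.

```python
import numpy as np

def ransac_ground_plane(points: np.ndarray, iters: int = 100, inlier_thresh: float = 0.15):
    """Fit a plane n.p + d = 0 to an (N, 3) cloud with RANSAC; return (normal, d, inlier mask)."""
    rng = np.random.default_rng(0)
    best_mask, best_model = None, None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(normal) < 1e-6:               # degenerate (collinear) sample
            continue
        normal = normal / np.linalg.norm(normal)
        d = -normal.dot(p1)
        dist = np.abs(points @ normal + d)              # point-to-plane distances
        mask = dist < inlier_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model[0], best_model[1], best_mask

# Toy cloud: flat ground plus a few elevated "obstacle" points
ground = np.c_[np.random.rand(200, 2) * 20, np.random.randn(200) * 0.03]
obstacle = np.array([[5.0, 5.0, 1.2], [5.2, 5.1, 1.5]])
normal, d, mask = ransac_ground_plane(np.vstack([ground, obstacle]))
print(mask.sum())   # close to 200 ground inliers; the elevated points are excluded
```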

Sensor Fusion Data

Camera–LiDAR fusion brings additional annotation requirements, ensuring ID consistency for the same target across different modalities. The same object must align between 2D image bounding boxes and 3D cuboids. Tools should support synchronized multi-view displays.

For FCW, ensure visual vehicles match radar/LiDAR returns, handling FoV mismatches. In surround systems, maintain cross-camera ID continuity for BSW as targets pass between views. Many teams label in a unified BEV frame, then back-project to each camera.
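
A minimal sketch of projecting LiDAR points (or 3D box centers) into a camera image with a pinhole intrinsic matrix K and a LiDAR-to-camera extrinsic transform is shown below; the calibration matrices here are placeholder values, not real sensor parameters. Projected box centers can then be matched to 2D boxes in much the same way radar tracks are matched to detections earlier in this post.

```python
import numpy as np

def project_lidar_to_image(points_lidar: np.ndarray, K: np.ndarray, T_cam_lidar: np.ndarray):
    """Project (N, 3) LiDAR points to pixel coordinates; return (N, 2) pixels and a validity mask."""
    homo = np.c_[points_lidar, np.ones(len(points_lidar))]     # (N, 4) homogeneous points
    pts_cam = (T_cam_lidar @ homo.T).T[:, :3]                  # transform into the camera frame
    in_front = pts_cam[:, 2] > 0.1                             # keep points in front of the camera
    uvw = (K @ pts_cam.T).T
    pixels = uvw[:, :2] / uvw[:, 2:3]                          # perspective divide
    return pixels, in_front

# Placeholder calibration (assumed values for a toy example, not from a real sensor)
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
T_cam_lidar = np.eye(4)                                        # identity extrinsics for the toy example
points = np.array([[0.0, 0.0, 10.0], [2.0, -1.0, 20.0]])
pixels, valid = project_lidar_to_image(points, K, T_cam_lidar)
print(pixels[valid])   # pixel locations of the points that land in front of the camera
```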


Sensor Fusion Data (Visualized on BasicAI Data Annotation Platform)

Temporal Data

Temporal data annotation distinguishes ADAS from general vision tasks. The core is maintaining target ID continuity, crucial for object tracking and trajectory prediction.

ACC functions need complete vehicle trajectories to train prediction models. Annotation must not only box each frame's position but also ensure correct ID association for vehicles reappearing after occlusion. Practice typically uses semi-automatic annotation, first generating initial trajectories with tracking algorithms, then manually correcting ID switches and occlusion handling.

LKA functions rely on stable lane line tracking. When lane lines appear intermittently due to wear, context must determine if they're the same line. This "virtual continuation" annotation strategy improves system stability. Annotations must also include lane line lifecycles, marking when they appear, disappear, or change type.

Behavior prediction requires future ground truth, often 3–5 seconds of future waypoints stored as polylines or parametric curves. Lane-change and turn intent can be labeled as auxiliary supervision.

4D-BEV Representation

4D-BEV (3D space + time) is becoming mainstream, especially in integrated functions like TJA/HWA. This representation requires building spatiotemporally consistent occupancy grids. Annotation is no longer simple bounding boxes but occupancy probability and semantic category for each grid cell.

Practice typically combines LiDAR point clouds and multi-camera images for annotation, first generating initial occupancy maps with LiDAR, then refining with image semantic information. Dynamic objects' velocity vectors also need annotation, automatically calculated from tracking trajectories but requiring manual verification at keyframes.
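
As a toy example of the spatial part of this representation, the sketch below rasterizes a point cloud into a BEV occupancy grid; the grid extent and resolution are illustrative, and a real 4D-BEV pipeline adds temporal accumulation, semantics, and per-cell velocity.

```python
import numpy as np

def bev_occupancy(points: np.ndarray, x_range=(0.0, 80.0), y_range=(-20.0, 20.0), cell_m=0.5):
    """Count LiDAR returns per BEV cell; a cell is 'occupied' if it holds at least one point."""
    nx = int((x_range[1] - x_range[0]) / cell_m)
    ny = int((y_range[1] - y_range[0]) / cell_m)
    grid = np.zeros((nx, ny), dtype=np.int32)
    ix = ((points[:, 0] - x_range[0]) / cell_m).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell_m).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    np.add.at(grid, (ix[valid], iy[valid]), 1)          # accumulate point counts per cell
    return grid > 0                                      # boolean occupancy map

points = np.array([[10.0, 1.2, 0.1], [10.1, 1.3, 0.2], [35.0, -5.0, 0.5]])
occ = bev_occupancy(points)
print(occ.sum())   # -> 2 occupied cells (the first two points fall in the same cell)
```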

Finding the Right ADAS Data Annotation Service Provider

High-quality data annotations set the ceiling for model performance.

With the shift to multimodal fusion and 4D-BEV, labeling complexity rises sharply. When selecting annotation services, key considerations include: multi-sensor fusion annotation capabilities, professional tools for ADAS scenarios, automatic pre-annotation efficiency, and team expertise with QA systems.

BasicAI focuses on autonomous driving data annotation, with in-house tools supporting rich multimodal workflows and industry-leading point cloud labeling. Its auto-annotation is tuned for ADAS and autonomous driving, multiplying efficiency.

BasicAI employs professional in-house teams ensuring 99%+ accuracy, with efficient project management delivering competitive pricing. Having served multiple Fortune 500 companies and top autonomous driving teams, BasicAI is an ideal partner for ADAS data annotation.

If this post helped, or you’re exploring ADAS annotation partnerships, we'd love to discuss your projects.

