Self-driving cars, or autonomous vehicles, primarily depend on sensor fusion to perceive their surroundings, combining inputs from advanced vehicle sensors such as cameras, lidar, and radar. The industry standard is 3D sensor fusion, which combines real-time 3D spatial data from the multiple sensors deployed on an autonomous vehicle. But while 3D data successfully captures scene geometry frame by frame, driving environments are dynamic across time, which static per-frame 3D data fails to fully model.
4D-BEV fusion takes this to the next level by introducing the fourth dimension: time!
Unlike 3D fusion, which is limited to what is visible in a single frame, 4D-BEV builds an omnidirectional, top-down (bird's-eye) view of the surroundings across space and time, which is crucial for autonomous driving algorithms. By correlating temporal frames, it uncovers hidden facets such as paths near blind spots. Its key edge over 3D fusion is the ability to predict the near future by inferring the motion of currently occluded objects. This predictive capacity lets self-driving cars plan ahead and execute smoother, safer maneuvers. Without robust 4D ground truth annotation, ML models can't reliably decipher the complex, moving real-world scenes critical for autonomous navigation.
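To make the temporal-fusion idea concrete, here is a minimal sketch, assuming per-frame lidar point clouds and known ego poses; the function `aggregate_bev` and its parameters are illustrative, not part of BasicAI Cloud:

```python
import numpy as np

def aggregate_bev(frames, poses, grid_size=200, resolution=0.5):
    """Fuse a short sequence of lidar frames into one BEV occupancy grid.

    frames: list of (N_i, 3) point arrays in each frame's sensor coordinates.
    poses:  list of (4, 4) ego poses mapping sensor coords into a shared frame.
    Returns a (grid_size, grid_size) grid counting hits per cell.
    """
    grid = np.zeros((grid_size, grid_size), dtype=np.int32)
    half = grid_size * resolution / 2.0
    for pts, pose in zip(frames, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
        world = (pose @ homo.T).T[:, :3]                 # into the shared frame
        ix = ((world[:, 0] + half) / resolution).astype(int)
        iy = ((world[:, 1] + half) / resolution).astype(int)
        ok = (ix >= 0) & (ix < grid_size) & (iy >= 0) & (iy < grid_size)
        np.add.at(grid, (iy[ok], ix[ok]), 1)             # accumulate hits over time
    return grid
```

Because hits from every frame land in one shared grid, moving objects leave "trails" that static geometry does not, which is exactly the temporal cue 4D annotation captures.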
To power this innovation, we've launched an all-new 4D-BEV fusion annotation toolkit on BasicAI Cloud, enabling leading-edge AI teams to construct expansive, high-quality 4D-BEV ground truth faster and more economically for training and evaluating perception models.
1. All-New 4D-BEV Fusion Data Annotation Tool
BasicAI Cloud's renowned 3D sensor fusion annotation tools now support annotating 4D-BEV fusion data out of the box, including efficient model-assisted 3D box creation for autonomous driving applications.
Adjust the point cloud and annotate ROIs (Regions of Interest); boxes are then generated automatically in the corresponding 2D frames, mapping objects across time, a key feature for dynamic object tracking in autonomous driving. Click the play button above the timeline and you'll see the annotation boxes in the 2D image sequence shift position over time, following the object motion. That's all it takes!
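Conceptually, generating those 2D boxes amounts to projecting the annotated 3D box's corners through each camera's calibration at every timestamp. A hedged sketch of one frame of that projection (the calibration conventions and function names here are assumptions, not BasicAI Cloud's internals):

```python
import numpy as np

def box_corners(center, size, yaw):
    """Eight corners of a 3D box (center xyz, size lwh, yaw about the z-axis)."""
    l, w, h = size
    x = np.array([1, 1, 1, 1, -1, -1, -1, -1]) * l / 2
    y = np.array([1, -1, 1, -1, 1, -1, 1, -1]) * w / 2
    z = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * h / 2
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return (R @ np.vstack([x, y, z])).T + center

def project_to_image(corners, T_cam_from_lidar, K):
    """Project 3D corners into the image; return a 2D box (x0, y0, x1, y1)."""
    homo = np.hstack([corners, np.ones((8, 1))])
    cam = (T_cam_from_lidar @ homo.T).T[:, :3]  # lidar -> camera coordinates
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                 # perspective divide
    return uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()
```

Repeating this per frame with the time-varying box pose is what makes the 2D boxes follow the object through the sequence.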
Use this to annotate static objects (vehicles, lane lines, traffic signs, etc.) in 4D-BEV fusion data and create tremendous volumes of perception training and evaluation data cost-effectively on BasicAI Cloud. The resulting ground truth can also be used to construct large-scale scene datasets containing both static and dynamic items, improving the performance of various self-driving modules: perception, localization, planning, and control. Moreover, 4D annotation enables full-pipeline testing, from single modules to end-to-end systems, by recreating simulation testbeds, which is critical for validating the safety of driverless systems.
2. Other Annotation Tool Updates
Additional updates shipped across point cloud, image, and audio & video tools.
2.1 Point Cloud Annotation Tool: Group Tracking
In point cloud annotation cases such as trailers or long trucks turning, the cab and body usually need separate boxes with associated IDs and tracking support. This association is handled by "Group" annotation, which now supports tracking.
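A minimal way to picture the cab/body association is a hypothetical group record that links the stable track IDs of its parts (the names here are illustrative, not BasicAI Cloud's data model):

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    track_id: int  # stable per-object ID across frames
    label: str
    frame: int

@dataclass
class Group:
    group_id: int
    member_ids: set = field(default_factory=set)  # track IDs of linked parts

def boxes_in_group(group, boxes, frame):
    """All member boxes of a group visible in a given frame."""
    return [b for b in boxes if b.frame == frame and b.track_id in group.member_ids]
```

With tracking enabled, the group relation persists across frames, so cab and trailer stay linked throughout the turn.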
We also introduced Sensor Distance, which shows each annotated object's distance from the sensor origin. And we optimized the Copy function between frames: all properties from the latest frame are now carried over along with the box.
2.2 Image Labeling Tool: Skeleton Redesign & PDF Support
We have redesigned skeleton annotation to be more intuitive. Nodes and Edges panels have been added for quick Edge creation, supporting batch additions that follow Node order or custom selections via Node index input. We also introduced a color picker and anti-misclick safeguards. For the polygon tool, adjacent polygons can now share sides through point sharing.
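The two Edge-creation modes can be sketched roughly as follows (function names are hypothetical, for illustration only):

```python
def edges_in_node_order(node_ids):
    """Batch-create edges by chaining nodes in order: [0, 1, 2] -> [(0, 1), (1, 2)]."""
    return list(zip(node_ids, node_ids[1:]))

def edges_from_indices(pairs, num_nodes):
    """Create edges from explicit node-index pairs, rejecting out-of-range indices."""
    for a, b in pairs:
        if not (0 <= a < num_nodes and 0 <= b < num_nodes):
            raise ValueError(f"node index out of range: {(a, b)}")
    return list(pairs)
```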
For OCR projects with PDF source data, our image datasets now support direct PDF upload and annotation, removing the extra conversion step.
2.3 Ontology Features
Copying Ontologies can be time-consuming. No more endlessly scrolling to find the right Classes to copy – our new search bar will help.
Creating new Classes also gets easier with a widened color palette preview: pick distinctive hues from the start instead of manually tweaking repetitive defaults. Duplicate colors are still allowed if desired.
Numbers are often used directly to represent Classes in models. Hence, you can now assign a custom number when creating a Class; copy, import, and export operations all sync these numbers and check for duplicates. Now your Ontology and your models can speak the same language!
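Name-to-number assignment with duplicate checking could be sketched like this (a hypothetical illustration, not BasicAI Cloud's implementation):

```python
def assign_class_numbers(classes, custom=None):
    """Map class names to numeric IDs, honouring custom assignments
    and rejecting duplicate numbers."""
    custom = custom or {}
    nums = list(custom.values())
    if len(nums) != len(set(nums)):
        raise ValueError("duplicate class numbers")
    mapping, next_id = {}, 0
    for name in classes:
        if name in custom:
            mapping[name] = custom[name]
        else:
            # skip IDs already taken by custom or earlier assignments
            while next_id in set(custom.values()) or next_id in mapping.values():
                next_id += 1
            mapping[name] = next_id
            next_id += 1
    return mapping
```

Keeping this mapping consistent across copy, import, and export is what lets model outputs and Ontology Classes line up without manual remapping.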
3. Collaboration System
Sampling accuracy and miss rate can now be broken down by both result and label dimensions, so you can assess label quality and miss rates for specific labels separately from overall accuracy during acceptance checks. Users can also choose whether a given result is included in accuracy and performance calculations.
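Per-label accuracy and miss rate can be computed along these lines (a simplified sketch that treats annotation matching as given; real acceptance checks involve matching criteria and reviewer judgments):

```python
from collections import defaultdict

def per_label_metrics(ground_truth, matched_ids):
    """Per-label accuracy (hit rate) and miss rate.

    ground_truth: list of (object_id, label) pairs.
    matched_ids:  set of object_ids successfully matched by the checked results.
    A ground-truth object absent from matched_ids counts as a miss for its label.
    """
    stats = defaultdict(lambda: {"total": 0, "hits": 0})
    for obj_id, label in ground_truth:
        stats[label]["total"] += 1
        if obj_id in matched_ids:
            stats[label]["hits"] += 1
    return {
        label: {
            "accuracy": s["hits"] / s["total"],
            "miss_rate": 1 - s["hits"] / s["total"],
        }
        for label, s in stats.items()
    }
```

Breaking the numbers out per label makes it obvious when, say, pedestrians are missed far more often than cars, even if overall accuracy looks healthy.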
Our updated skeleton annotation analysis spots mislabeled points precisely without skewing overall image-level grading.
Lastly, suspended or released annotation tasks no longer contribute to submitted-performance statistics, keeping metrics focused on active work.
4. Other Enhancements
Several UX improvements and bug fixes also shipped:
- Highlights for pending reviews in the To-Do List
- New timeline icons signaling commented frames
- Fine-grained display controls for skeleton point annotations (index, attributes, label info, etc.)
- Fixed a bug where pending data was wiped when tapping "More" in the To-Do List
Reliable autonomy starts with quality data. With this release, BasicAI Cloud aims to fuel safer self-driving by equipping developers with upgraded tools to annotate rich 4D perception data efficiently and collaborate at scale. We're excited to accelerate the future hand-in-hand with our pioneering partners on BasicAI Cloud.
Stay tuned for more groundbreaking features, and thank you for being part of our journey!