In the early 17th century, Galileo and an assistant attempted to measure the speed of light by standing on two distant hilltops with covered lanterns, trying to time the delay between uncovering one lantern and seeing the answering flash from the other.
Unsurprisingly, they failed: light travels far too fast for the human eye to perceive any delay over such a distance.
Fast-forward a few centuries, and we not only know the speed of light but can also measure the precise time difference between a light beam hitting an object and returning. This allows us to determine the distance from the emission point to the target object, which is the underlying principle of most LiDAR systems used in self-driving cars.
In this blog post, we'll explore LiDAR's characteristics, data types, algorithms, and, most importantly, how to annotate LiDAR data to create training datasets for various 3D algorithms.
What is LiDAR? How Does LiDAR Work?
LiDAR stands for Light Detection and Ranging. It is an advanced sensing technology that determines the position, velocity, and shape of a target by emitting laser beams and receiving the reflected light signals. It enables high-precision measurements and plays a crucial role in self-driving cars, drones, robots, environmental monitoring, and military applications.
Compared to other sensors, LiDAR offers several advantages:
LiDAR vs. Cameras: LiDAR provides high-precision distance and angular measurements, generating high-resolution 3D point clouds. Moreover, LiDAR can operate in both daylight and darkness, unaffected by lighting conditions, and works effectively even in environments lacking texture.
LiDAR vs. Millimeter-Wave Radar: LiDAR delivers far higher resolution and angular accuracy than millimeter-wave radar, which matters wherever detailed 3D shape is required. Radar, on the other hand, is more robust in rain, fog, and sandstorms, which is why the two sensors are often deployed together.
LiDAR vs. Ultrasonic Sensors: LiDAR's ranging accuracy is far higher than that of ultrasonic sensors, which are more susceptible to multipath effects and variations in sound velocity. LiDAR's measurement error is typically in the centimeter range.
Understanding LiDAR Data
3D Point Cloud
The most direct output of LiDAR is a 3D point cloud: a set of three-dimensional coordinate points in space. Each point carries XYZ coordinates and often additional attributes, such as reflectance intensity or, in some advanced systems, color.
Point cloud data visually represents the 3D structure of the environment and serves as the foundation for 3D modeling and object recognition. It provides self-driving systems with precise obstacle locations, road profiles, and detailed surrounding information.
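To make this concrete, a frame in the widely used KITTI .bin format is just a flat array of float32 values, four per point (x, y, z, intensity). A minimal loading sketch in Python; the file name is a placeholder:

```python
import numpy as np

def load_kitti_bin(path: str) -> np.ndarray:
    """Load a KITTI-format .bin LiDAR frame as an (N, 4) array.

    Columns are x, y, z (meters, in the sensor frame) and reflectance intensity.
    """
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

# Hypothetical file name, for illustration only
points = load_kitti_bin("frame_000000.bin")
print(points.shape)          # (N, 4)
print(points[:, :3].min(0))  # lower corner of the scan's bounding volume
```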
Consecutive Point Clouds
Consecutive point cloud frames refer to a series of point cloud data captured by LiDAR at continuous time points. These consecutive frames can be used to build models of dynamic environments.
Through time-series analysis, object motion can be tracked, and trajectories can be predicted, which is crucial for object tracking and scene understanding in autonomous driving.
Processing consecutive frame data requires considering temporal consistency to ensure the coherence of point cloud data over time.
Multi-Sensor Fusion Data
In autonomous technologies, LiDAR data is often fused with data from other sensors (e.g., camera images, millimeter-wave radar data, or infrared sensor data) to improve perception accuracy and robustness.
This fusion data combines the strengths of different sensors, such as the high-precision 3D information from LiDAR and the texture information from cameras, providing a more comprehensive understanding of the environment and enhancing decision-making accuracy.
4D-BEV Data
4D-BEV data is a special data representation that converts 3D point cloud data into a representation observed from a top-down (bird's eye view) perspective while considering the time dimension, hence the term "4D."
This data format is particularly suitable for autonomous driving because it intuitively displays the vehicle's surroundings, including roads, other vehicles, pedestrians, etc., and is easy to understand and process.
4D-BEV provides a real-time dynamic view of the vehicle's surroundings, which is highly useful for path planning and obstacle avoidance decisions.
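As a sketch of the conversion step, the following NumPy snippet collapses a single point cloud into a BEV height map; the ranges and resolution are illustrative choices, not fixed standards. Stacking such maps across consecutive frames supplies the time dimension.

```python
import numpy as np

def points_to_bev_height_map(points, x_range=(0.0, 70.0),
                             y_range=(-40.0, 40.0), resolution=0.1):
    """Rasterize an (N, >=3) point cloud into a top-down height map.

    Each BEV cell stores the maximum z value among the points falling in it.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = (x >= x_range[0]) & (x < x_range[1]) & \
           (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[mask], y[mask], z[mask]

    cols = ((x - x_range[0]) / resolution).astype(int)
    rows = ((y - y_range[0]) / resolution).astype(int)

    h = int((y_range[1] - y_range[0]) / resolution)
    w = int((x_range[1] - x_range[0]) / resolution)
    bev = np.full((h, w), -np.inf)
    np.maximum.at(bev, (rows, cols), z)  # keep the highest point per cell
    bev[np.isinf(bev)] = 0.0             # empty cells default to ground level
    return bev
```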
From LiDAR Data to Machine Learning Algorithms
Point cloud data supports various algorithms and applications in multiple domains, such as:
3D Object Detection and Recognition: Algorithms like PointNet and PointPillars utilize point cloud data for object localization and classification, widely used in obstacle detection for self-driving vehicles.
Point Segmentation: Semantic segmentation algorithms like PointSeg and SPG perform point-wise classification on point clouds, distinguishing ground, buildings, vegetation, etc., which is crucial for urban modeling and environmental understanding.
Behavior Analysis and Path Planning: In robots and drones, point cloud data is used for dynamic environment analysis, such as crowd density estimation and dynamic ground and lane detection, enabling intelligent path planning.
SLAM: Point cloud data is a key input for SLAM algorithms like LOAM (Lidar Odometry and Mapping) and LeGO-LOAM, which utilize point clouds for real-time localization and map construction, suitable for robot navigation.
3D Scene Reconstruction: Point cloud data, such as point clouds generated from structured light or multi-view stereo matching, is used for 3D reconstruction. Combined with algorithms like MVS (Multi-View Stereo) reconstruction, detailed 3D models can be created.
Point Cloud Feature Extraction and Matching: Feature descriptors like FPFH and SHOT, along with matching algorithms based on these features, provide the foundation for robot visual localization and target recognition (see the sketch after this list).
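As an example of that feature-extraction step, the open-source Open3D library exposes FPFH descriptors directly. A minimal sketch, with typical but tunable search radii and a placeholder input file:

```python
import open3d as o3d

def compute_fpfh(pcd: o3d.geometry.PointCloud,
                 normal_radius: float = 0.1,
                 feature_radius: float = 0.25):
    """Estimate normals, then compute 33-dimensional FPFH descriptors."""
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=normal_radius, max_nn=30))
    return o3d.pipelines.registration.compute_fpfh_feature(
        pcd,
        o3d.geometry.KDTreeSearchParamHybrid(radius=feature_radius, max_nn=100))

# Hypothetical input file, for illustration only
pcd = o3d.io.read_point_cloud("scan.pcd")
fpfh = compute_fpfh(pcd)
print(fpfh.data.shape)  # (33, N): one descriptor column per point
```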
Mastering 3D LiDAR Annotation for ML Algorithms
Raw point cloud data cannot be directly used as AI training data for ML algorithms. It requires a series of processing steps, with data annotation being the most critical one.
What is LiDAR Annotation and Why is it Important?
Similar to image and text annotation, LiDAR point cloud data annotation involves manually or semi-automatically adding point cloud labels or categories to the 3D point cloud data collected from LiDAR sensors to identify specific objects, surfaces, or features.
Annotation tasks typically include defining object boundaries, classifying point clouds into different object types (e.g., vehicles, pedestrians, buildings), and tracking objects across consecutive frames.
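Label schemas vary between tools and datasets, but a typical 3D cuboid annotation records a center, dimensions, a heading angle, a class label, and, for tracking tasks, a persistent object ID. A hypothetical record, sketched as a Python dataclass with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class CuboidLabel:
    """One 3D bounding-box annotation; field names are illustrative."""
    cx: float           # box center in the LiDAR frame (m)
    cy: float
    cz: float
    length: float       # box dimensions (m)
    width: float
    height: float
    yaw: float          # heading angle around the z-axis (rad)
    category: str       # e.g. "vehicle", "pedestrian"
    track_id: int = -1  # stable across frames in tracking tasks

label = CuboidLabel(cx=12.4, cy=-3.1, cz=0.9,
                    length=4.5, width=1.9, height=1.6,
                    yaw=0.07, category="vehicle", track_id=17)
```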
Deep learning models require a large amount of annotated data for training to recognize and classify objects in point clouds. Without these annotations, models cannot learn the complexity of the environment, affecting their real-world performance.
LiDAR annotation helps systems understand unstructured environments, such as distinguishing navigable areas from obstacles, which is crucial for the safe navigation of self-driving vehicles in complex urban environments.
4 Essential LiDAR Annotation Types
In 3D LiDAR point cloud data annotation tasks, there are four common annotation types based on different application scenarios and requirements:
Object Detection Annotation: This focuses on identifying and localizing specific objects in the point cloud by drawing 3D bounding boxes and assigning category labels.
3D Object Tracking: Building on detection, tracking annotation follows the same object across consecutive point cloud frames and assigns it a unique ID so it can be traced continuously.
3D Semantic Segmentation: This emphasizes the semantic division of each point in the point cloud, assigning clear categories to each part, such as ground, buildings, trees, etc.
Scene Classification Annotation: This assigns a category to the entire point cloud scene based on its overall features.
These four annotation types have wide applications in autonomous driving, robot navigation, 3D reconstruction, virtual reality, and other fields.
High-quality annotated data can effectively improve the performance and reliability of relevant algorithms.
5 LiDAR Annotation Tools
Selecting the right 3D annotation tool before executing annotations can ensure efficient and high-quality annotation. Here are five industry-recognized 3D point cloud annotation tools:
BasicAI Cloud
A powerful smart annotation tool that supports object detection, 3D semantic segmentation, object tracking, and point cloud classification tasks for 3D LiDAR Fusion data. The platform's optimized algorithms are particularly suitable for automatic annotation in autonomous driving scenarios. Combined with a robust collaborative annotation suite, it can greatly improve 3D annotation efficiency.
💡 Learn More: https://www.basic.ai/basicai-cloud-data-annotation-platform
Xtreme1
An open-source project that provides 3D LiDAR data annotation, organization, and ontology management tools. It supports 2D/3D object detection and 3D point cloud segmentation, making it suitable for small AI teams or academic research groups to deploy locally for more complex projects.
💡 Learn More: https://github.com/xtreme1-io/xtreme1
point-cloud-annotation-tool
A lightweight open-source LiDAR labeling tool that supports visualization and 3D cuboid annotation of point clouds in the KITTI .bin format. One advantage is its ability to remove ground points using a height threshold or plane detection.
💡 Learn More: https://github.com/springzfx/point-cloud-annotation-tool
Supervise.ly
An online annotation platform focused on 3D point cloud annotation that exports .json files, well suited to remote team collaboration. The platform lets users move the viewpoint freely within the 3D point cloud, making it easy to locate objects of interest.
💡 Learn More: https://supervisely.com/labeling-toolbox/3d-lidar-sensor-fusion/
Semantic Segmentation Editor
A web-based 3D point cloud labeling tool that supports target annotation in images (.jpg or .png) captured by ordinary cameras and point clouds (.pcd) generated by LiDAR. This tool was developed in the context of autonomous driving research and is suitable for annotating autonomous driving data.
💡 Learn More: https://github.com/Hitachi-Automotive-And-Industry-Lab/semantic-segmentation-editor
Techniques for Different Types of LiDAR Annotation
In this section, we'll walk you through four key LiDAR annotation tasks using the BasicAI Cloud* platform. You'll learn how to effectively prepare LiDAR training data for autonomous driving scenarios.
Annotating for 3D Object Detection
Object detection in 3D point clouds involves identifying and labeling specific objects with 3D cuboids, 3D polygons, or 3D polylines. You might use 3D boxes to mark obstacles, polygons for drivable areas, and polylines for lane markings. Detailed class labels are often included too.
Ontology Building: To get started, create a set of ontologies on BasicAI Cloud* – a hierarchical label system with classes, sub-classes, and attributes. For a driving dataset, you might define classes such as vehicle, pedestrian, and cyclist, each with relevant attributes.
Annotating: The annotation interface has a toolbar on the left, a 3D point cloud and object views in the center, and an ontology list on the right.
BasicAI Cloud*'s LiDAR data annotation tool lets you annotate a 3D object with just two diagonal clicks on it. Select the appropriate label and attributes, and voilà: one object annotated.
Pro tip: The cuboid's default orientation runs from the first click toward the second, as the sketch below shows.
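Under that convention, the default heading follows from the two click points alone; a minimal sketch (the function name is ours, not part of the platform):

```python
import math

def default_yaw(first_click, second_click):
    """Heading angle implied by two diagonal clicks on the ground plane.

    Points are (x, y) in the top-down view; yaw points from the first
    click toward the second, per the convention described above.
    """
    dx = second_click[0] - first_click[0]
    dy = second_click[1] - first_click[1]
    return math.atan2(dy, dx)

print(default_yaw((0.0, 0.0), (4.2, 1.8)))  # ~0.405 rad
```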
Smart Feature: For auto-labeling, click the brain icon to run built-in models that predict all supported objects in the scene. Just remember to manually fine-tune the results for full accuracy.
Annotating for 3D Object Tracking
Object tracking takes detection a step further by tracing the same object across consecutive point cloud frames. This is crucial for predicting dynamic object behavior and collision avoidance.
You may want to assign consistent identifiers to each object to ensure continuous, accurate tracking.
Ontology Building: The ontology setup is the same as for detection, using the "Cuboid" type.
Annotating: In the multi-frame annotation view, click the cog icon to configure tracking settings. BasicAI Cloud* offers three tracking methods:
Copy: Manually annotate the object in its starting frame, then copy the cuboid position to subsequent frames.
Interpolation: Annotate the object in the start and end frames, and let the system calculate its position in between (sketched after this list).
Model: Manually annotate the starting frame, then let the built-in model compute and label the object's motion in later frames.
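The interpolation idea is easy to express in code: interpolate the center linearly and the yaw along the shortest angular arc. A minimal sketch of the concept, not BasicAI Cloud*'s internal implementation:

```python
import math

def interpolate_cuboid(start, end, t):
    """Pose of a tracked cuboid at fraction t in [0, 1] between two keyframes.

    `start` and `end` are (cx, cy, cz, yaw) tuples; the box dimensions are
    assumed constant for a rigid object and omitted here.
    """
    cx = start[0] + t * (end[0] - start[0])
    cy = start[1] + t * (end[1] - start[1])
    cz = start[2] + t * (end[2] - start[2])
    # Interpolate yaw along the shortest arc to avoid a 2*pi jump
    dyaw = (end[3] - start[3] + math.pi) % (2 * math.pi) - math.pi
    return cx, cy, cz, start[3] + t * dyaw

# Pose halfway between two keyframes
print(interpolate_cuboid((0, 0, 0.9, 0.1), (10, 2, 0.9, 0.5), 0.5))
```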
Point Cloud Segmentation
Segmentation is all about partitioning point clouds into regions with similar attributes, like ground, buildings, or vegetation. It helps algorithms understand the scene structure and distinguish foreground from background.
Ontology Building: To set up the ontology, follow the detection task steps but choose the "Segmentation" annotation type.
Annotating: In the annotation interface, switch to the "Segmentation" tab to access the segmentation tools. The Lasso Pen and Polygon are the most commonly used for drawing around groups of similar points.
Pro tip: Adjust the Height Range above the point cloud to filter points by elevation – super handy for ground segmentation! The sketch below shows the same idea in code.
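Outside the annotation UI, the same height-range idea can be prototyped in a few lines, or replaced with a RANSAC plane fit of the kind point-cloud-annotation-tool uses. A sketch with Open3D; the file name and thresholds are illustrative:

```python
import numpy as np
import open3d as o3d

# Hypothetical input file, for illustration only
pcd = o3d.io.read_point_cloud("scan.pcd")

# Option 1: crude height-range filter (keep points above an assumed ground z)
pts = np.asarray(pcd.points)
non_ground_mask = pts[:, 2] > -1.4  # depends on the sensor's mounting height
non_ground = pcd.select_by_index(np.where(non_ground_mask)[0])

# Option 2: RANSAC plane fit, labeling inliers as ground
plane_model, inliers = pcd.segment_plane(distance_threshold=0.1,
                                         ransac_n=3,
                                         num_iterations=1000)
ground = pcd.select_by_index(inliers)
objects = pcd.select_by_index(inliers, invert=True)
print(f"ground points: {len(ground.points)}, others: {len(objects.points)}")
```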
Smart Feature: You can also trigger auto point cloud segmentation with the brain icon to run BasicAI Cloud*'s built-in model.
3D Scene Classification
Classification assigns a single category label to an entire 3D scene, such as "urban" vs. "rural", or a vehicle-density rating.
Ontology Building: Create your classification labels in the ontology center.
Annotating: On the far right of the annotation interface, you'll find three vertically arranged tabs. By default, "Results" is selected. Switch to the "Classifications" tab to see the classification labels we just created. Simply select the appropriate label for the whole scene.
Bonus: Multi-Sensor Annotation
For an even richer environment model, try combining 2D context with 3D data. Annotate objects in the 2D view and match them to their 3D counterparts, or vice versa.
Platforms like BasicAI Cloud* offer online sensor calibration. With 3D-to-2D projection, they can automatically generate pseudo-3D and 2D bounding boxes in the image when you label a 3D box in the point cloud.
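The projection itself is standard pinhole geometry: transform points from the LiDAR frame into the camera frame with the calibrated extrinsics, then apply the intrinsics and divide by depth. A minimal sketch, with placeholder calibration matrices:

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project (N, 3) LiDAR points into pixel coordinates.

    T_cam_lidar: 4x4 extrinsic transform from LiDAR to camera frame.
    K:           3x3 camera intrinsic matrix.
    Returns (M, 2) pixel coordinates for points in front of the camera.
    """
    homo = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    cam = (T_cam_lidar @ homo.T).T[:, :3]  # into the camera frame
    cam = cam[cam[:, 2] > 0]               # keep points with positive depth
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]          # perspective divide

# Placeholder calibration: identity extrinsics, generic intrinsics
T = np.eye(4)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[2.0, 0.5, 10.0], [2.0, -0.5, 10.0]])  # illustrative points
print(project_lidar_to_image(corners, T, K))
```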
This fusion of depth and texture data is key to enhancing the perception capabilities of autonomous driving systems.
📖 Learn more about fusion annotation techniques in our previous blog post: link.
Orchestrating a Winning LiDAR Labeling Project
Running a large-scale 3D annotation project is a complex undertaking. You need effective standards and protocols, efficient workflows, rigorous quality control, and attentive progress monitoring.
Here are some essential strategies:
Define Crystal-Clear Annotation Guidelines
Before you begin, create detailed annotation guidelines tailored to your use case. Define target classes, identification criteria, tool usage rules, and more. Use plenty of example images to eliminate ambiguity.
Guidelines can evolve as the project progresses, but make sure to sync any updates with the whole annotation team.
Design a Smooth, Parallel Workflow
For large datasets, split the workload into batches and assign them to different annotators or teams. Include some overlap between batches for quality control, and have multiple people cross-check critical edge cases.
Look for point cloud annotation tools that let you customize the workflow to effectively collaborate with internal and external partners.
Implement Multilayered Quality Assurance
Targeted annotator training at the start of the project is essential to develop strong data comprehension and judgment skills.
Platforms like BasicAI Cloud* can perform real-time quality checks based on pre-defined rules to nip errors in the bud. Conduct manual spot checks or automated inspections to catch and correct issues early.
💻 Watch Video: Real-time and batch quality checks for ground truth annotations: link.
Monitor Progress with Real-Time Analytics
As a project manager, keep a close eye on annotation volume, quality, and per-annotator metrics. Quantify accuracy rates, false negative ratios, and other indicators to identify weaknesses and make targeted improvements.
Choose a platform with real-time task progress tracking, so you can get a pulse on individual and overall performance to quickly mitigate schedule risks.
Finally, don't forget to periodically review project milestones, pinpoint challenges, and fine-tune your project management approach.
With these techniques and best practices in your toolkit, you'll be well on your way to producing top-notch annotated LiDAR datasets for training high-performance autonomous models.
Real-World LiDAR Applications: 4 Inspiring Use Cases
Autonomous Vehicles
Self-driving cars are a marvel of integrated technologies. They rely on high-definition maps, real-time object localization, and obstacle avoidance, in which LiDAR plays a big role.
The algorithms behind autonomous driving can be divided into perception, decision-making, and planning/control stages.
In the perception stage, localization and object recognition take center stage. If LiDAR is the primary localization sensor, comparing its scans with HD maps can pinpoint the vehicle's current position. Even without maps, aligning the current LiDAR scan with previous ones using the ICP (Iterative Closest Point) algorithm can estimate the vehicle's motion, as sketched below.
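Scan-to-scan alignment with ICP takes only a few lines in Open3D; a minimal sketch in which the file names, voxel size, and correspondence threshold are all illustrative:

```python
import open3d as o3d

# Hypothetical consecutive scans, for illustration only
source = o3d.io.read_point_cloud("scan_t1.pcd").voxel_down_sample(0.2)
target = o3d.io.read_point_cloud("scan_t0.pcd").voxel_down_sample(0.2)

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=1.0,  # meters; depends on expected motion
    estimation_method=o3d.pipelines.registration
        .TransformationEstimationPointToPoint())

# The 4x4 transform approximates the ego-motion between the two scans
print(result.transformation)
```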
For object detection and tracking, LiDAR's point cloud data solves the challenge of judging object distances. By leveraging LiDAR's properties, self-driving cars can accurately estimate the distance, height, and even surface shape of reflective obstacles. This greatly improves detection accuracy with lower algorithmic complexity than camera-based approaches, meeting the real-time requirements of autonomous vehicles.
Robotics and Automation
Some humanoid robots use 3D vision systems powered by LiDAR to obtain precise distance information about their surroundings. This enables them to build real-time maps for autonomous navigation. Point cloud data helps robots understand the environment's layout, plan optimal paths, avoid obstacles, and ensure safe, efficient movement.
In automated production lines and precision assembly, LiDAR provides high-accuracy position information. This allows machines to accurately identify specific objects (like items on warehouse shelves) and perform assembly or welding operations, boosting productivity and quality.
Precision Agriculture
Agriculture, one of the most traditional and long-standing sectors, is benefiting from the opportunities brought by increasing automation. Just as LiDAR has turbocharged the development of autonomous vehicles, it's also proving to be a boon for agriculture.
Farms have lower traffic density and complexity than public roads, making autonomous agricultural vehicles easier to implement. Companies like John Deere and FarmWise are developing self-driving harvesters and tractors. Moreover, LiDAR works in complete darkness, an advantage over camera-only systems for round-the-clock field work.
By accurately recording and evaluating parameters like soil conditions and yield, land cultivation (sowing and fertilizing) can be optimally adapted to various conditions.
But how are these parameters recorded? That's where LiDAR sensors come into play.
For example, LiDAR mounted on a tractor can precisely measure the height, volume, and quality of a cornfield to determine the expected yield. Monitoring feed areas is another potential application - LiDAR sensors can detect material levels for timely, automatic replenishment.
Smart Mining
For many, coal mining is synonymous with danger. To fundamentally reduce accidents and promote smart mine construction, LiDAR is used to monitor ground subsidence and slope stability, promptly detecting potential landslide or collapse risks. By continuously scanning and analyzing changes in point cloud data, early warnings can be issued to ensure operational safety.
In underground mines, LiDAR also provides real-time environmental awareness for unmanned transport vehicles, enabling safe navigation and collision avoidance.
Equally important is geological exploration. LiDAR can quickly obtain high-precision terrain point cloud data to create detailed topographic models. This is crucial for mineral resource exploration, mining planning, and post-mining reclamation, helping engineers accurately calculate earthwork volumes and optimize mining paths and sequences.
The Future of LiDAR: Trends and Opportunities
LiDAR technology is continuously evolving and maturing. Falling costs and improving performance make it likely to become more prevalent across vehicle models.
In the future, LiDAR will trend towards miniaturization and integration, with solid-state LiDAR potentially becoming mainstream. Wavelength solutions may be optimized for different needs.
Accordingly, point cloud data annotation also faces innovation. AI-powered automated annotation tools will become smarter, and the use of synthetic data will increase to supplement real data.
In the autonomous driving field, LiDAR will remain a key sensor for ADAS and fully autonomous driving in the short term, especially in complex urban environments and extreme conditions. However, as pure vision solutions mature, the necessity of LiDAR in some application scenarios may decrease. Nevertheless, LiDAR will continue to have application prospects in specific areas such as logistics and mining in extreme environments.
* To further enhance data security, we discontinued the Cloud version of our data annotation platform on October 31st, 2024. Please contact us for a customized private deployment plan that meets your data annotation goals while prioritizing data security.