At present, the key technologies of autonomous driving include environmental perception, precise positioning, decision-making and planning, control and execution, high-precision mapping, V2X vehicle networking, and vehicle testing and verification.
Supported by this technology stack and its key hardware and software, a self-driving vehicle perceives its surroundings through sensors such as on-board cameras, lidar, millimeter-wave radar, and ultrasonic sensors; monitors changes in the environment in real time; and makes decisions based on the information obtained, thereby forming a safe and reasonable path plan.
Once the path is planned, the vehicle's execution system controls the vehicle to drive along it.
This core technology system of autonomous driving can be summarized simply as “perception, decision-making, and execution”.
Among them, sensors undertake the perception work.
The perception system, sometimes referred to as the “middle control system”, is responsible for sensing the surrounding environment and for collecting and processing environmental information together with information from inside the vehicle; it mainly involves technologies such as road boundary detection, vehicle detection, and pedestrian detection.
To realize autonomous driving, one problem must be solved first: driving safety.
To ensure that an autonomous vehicle can make correct decisions in various scenarios, it must collect and identify data about its surroundings dynamically and in real time, including but not limited to vehicle status, traffic flow, road conditions, and traffic signs.
In other words, environmental perception plays a role similar to a human driver’s eyes and ears.
To meet the needs of environmental perception, self-driving cars are equipped with many on-board sensors, such as cameras, lidar, millimeter-wave radar, and ultrasonic sensors. With these sensors working together with V2X, multi-source information such as the traffic environment and vehicle status can be obtained in real time, supporting decision-making.
At present, there are two technical routes for environmental perception. One is a camera-led multi-sensor fusion approach, typified by Tesla.
More info: What is Tesla’s Leading Edge in Autonomous Driving?
The other is a solution with lidar as the leading sensor and other sensors as auxiliaries; typical corporate representatives include Google and Baidu.
More info: Baidu disclosed for the first time that the cost price of the Apollo Moon is 480,000 RMB ($74,160)
Labeled 3D point cloud data is the basic training data for driverless technology.
3D point cloud annotation is considered the most appropriate method for precise detection with lidar sensors.
It labels target objects in a 3D image collected by lidar sensors using 3D bounding boxes; targets include vehicles, pedestrians, traffic signs, trees, and so on.
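To make the idea concrete, a single 3D bounding-box label can be sketched as a small record. The field names below are illustrative, not any specific vendor's schema:

```python
from dataclasses import dataclass

# A minimal sketch of one 3D cuboid label. Field names and units are
# illustrative assumptions, not a specific annotation tool's format.
@dataclass
class Cuboid3D:
    track_id: int    # stable object ID across frames
    category: str    # e.g. "car", "pedestrian", "traffic_sign"
    center: tuple    # (x, y, z) in the lidar frame, meters
    size: tuple      # (length, width, height), meters
    yaw: float       # heading around the vertical axis, radians

label = Cuboid3D(track_id=7, category="car",
                 center=(12.4, -3.1, 0.8),
                 size=(4.5, 1.9, 1.6),
                 yaw=0.12)
print(label.category)  # car
```

An annotated scene is then simply a list of such cuboids per lidar frame.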
In the field of AI autonomous driving, accurate environmental perception and precise positioning are the keys to reliable navigation, informed decision-making, and safe driving in complex dynamic environments. Both tasks require acquiring and processing highly accurate and informative data from the real environment.
To obtain this data, unmanned vehicles or mobile mapping vehicles are usually equipped with sensors such as lidar or cameras. Traditionally, image data captured by a camera provides two-dimensional semantic information; because of its low cost and high efficiency, 2D data is the most common form of perception data.
However, 2D image data lacks 3D geometric information, so the dense and accurate 3D point cloud data collected by lidar is used as well. In addition, lidar is not sensitive to changes in lighting conditions and can work day and night, even under strong light or shadow interference, which is a major advantage of 3D point cloud data.
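The relationship between the two data sources can be illustrated by projecting a 3D lidar point into a 2D camera image with a simple pinhole model; this is the basic step behind 2D–3D fusion. The camera intrinsics (fx, fy, cx, cy) below are made-up values, and lens distortion and the lidar-to-camera transform are omitted:

```python
# Hedged sketch: projecting a 3D point (already in the camera frame) to a
# 2D pixel with a pinhole camera model. Intrinsics are assumed values.
fx, fy, cx, cy = 1000.0, 1000.0, 640.0, 360.0

def project(point):
    """Project (x, y, z) in the camera frame to pixel (u, v).
    Returns None for points behind the camera (z <= 0), which a 2D
    image cannot represent - the 3D information 2D data lacks."""
    x, y, z = point
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

# A point 10 m ahead and 1 m to the right lands right of image center.
print(project((1.0, 0.0, 10.0)))  # (740.0, 360.0)
```

Note that depth is lost in the projection: every point along the same ray maps to the same pixel, which is why the lidar point cloud is needed to recover 3D geometry.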
ByteBridge is a human-powered and ML-powered data labeling platform. We provide scalable, high-quality training data for the ML/AI industry with flexible workflows.
Quality and Accuracy
- ML-assisted pre-labeling helps reduce human errors
- Real-time QA and QC are integrated into the labeling workflow, with a consensus mechanism to ensure accuracy
- Consensus: the same task is assigned to several workers, and the answer returned by the majority is accepted as correct
- All results are screened and inspected by both machines and human workers
In this way, ByteBridge can affirm that our data acceptance and accuracy rate is over 98%.
- Progress preview: clients can monitor labeling progress in real time on the dashboard
- Result preview: clients can view results in real time on the dashboard
Real-time outputs: clients can get output results in real time through the API. We support JSON, XML, CSV, and more, and can provide custom data types to meet your needs.
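The consensus mechanism described above can be sketched as a simple majority vote. A real pipeline would also weight workers by reliability and handle ties; this minimal version only shows the idea:

```python
from collections import Counter

# Sketch of the consensus mechanism: the same task goes to several
# workers, and an answer is accepted only if a strict majority agrees.
def consensus(answers):
    winner, votes = Counter(answers).most_common(1)[0]
    # Require a strict majority; otherwise the task needs re-review.
    return winner if votes > len(answers) / 2 else None

print(consensus(["car", "car", "truck"]))  # car
print(consensus(["car", "truck", "bus"]))  # None -> escalate to QA
```

Tasks with no majority answer would be routed back into the QA/QC workflow rather than accepted.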
ByteBridge’s self-developed 3D point cloud labeling tool, quality inspection tool, and pre-labeling functions can deliver high-quality, high-precision 3D point cloud annotation for 2D-3D fusion or 3D images from different manufacturers and devices, and provide a one-stop management service covering labeling, QA, and QC.
More info: ByteBridge Launches World’s First Mobile 3D Point Cloud Data Labeling Service
3D Point Cloud Annotation Types:
- Sensor Fusion Cuboids: 12 categories, including car, truck, heavy vehicle, two-wheeled vehicle, pedestrian, etc.
- Sensor Fusion Segmentation: obstacle classification, differentiation of different lane types
- Sensor Fusion Cuboids Tracking
① The same object keeps the same ID across frames, and its leaving state is labeled;
② Point clouds or time-aligned images can be provided as input; outputs are point clouds only.
Advantages of 3D Point Cloud Annotation Service:
- Support 2D, 3D mapping, support multiple cameras
- Support large amount of data annotation
- Support continuous frame tracking
- Support a management mode covering labeling, quality inspection, and acceptance
- Support Pre-identification
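The pre-identification and ML-assisted pre-labeling mentioned above usually work by a confidence threshold: model predictions above the threshold become pre-labels for human review, while the rest are labeled from scratch. The prediction format and threshold below are illustrative assumptions:

```python
# Hedged sketch of ML-assisted pre-labeling: high-confidence model
# predictions become pre-labels for human review; low-confidence ones
# go to annotators from scratch. Threshold and schema are assumed.
def split_for_review(predictions, threshold=0.9):
    pre_labeled = [p for p in predictions if p["score"] >= threshold]
    needs_labeling = [p for p in predictions if p["score"] < threshold]
    return pre_labeled, needs_labeling

preds = [{"category": "car", "score": 0.97},
         {"category": "pedestrian", "score": 0.55}]
pre, todo = split_for_review(preds)
print(len(pre), len(todo))  # 1 1
```

This is how pre-labeling reduces human errors: annotators correct a mostly-right draft instead of boxing every object by hand.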
The collaboration of a human workforce and AI algorithms ensures a price 50% lower than the conventional market.
Because the quality of labeled datasets determines the success of the self-driving industry, cooperating with a reliable partner can help developers overcome data labeling challenges.
If you need data labeling and collection services, have a look at bytebridge.io, where clear pricing is available.
Please feel free to contact us: email@example.com