The Perception Problem
One of the key capabilities an autonomous vehicle needs is the ability to perceive the environment around it, so that it can make informed decisions and plan its path accordingly.
That is the perception problem.
If you think about it, a self-driving car needs to accurately detect and track the objects around it in all scenarios. It has to work in the glaring afternoon sun as well as in pitch darkness. Even when it is snowing, raining, or foggy, the perception systems need to work impeccably.
Autonomous vehicles usually have a slew of sensors attached to them. The most common ones are:
LiDAR Sensors
The LiDAR sensor fires rapid pulses of laser light, sometimes at up to 150,000 pulses per second. A sensor on the instrument measures the time it takes for each pulse to bounce back, which is used to create 3D models and maps of objects and environments.
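As a rough illustration, here is a minimal Python sketch of that time-of-flight calculation; the function name and the example pulse timing are invented for illustration.

```python
# A minimal sketch of LiDAR time-of-flight ranging, assuming an ideal
# pulse: distance is recovered from how long the pulse takes to return.
SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def range_from_round_trip(round_trip_s: float) -> float:
    # The pulse travels to the object and back, so halve the path.
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2

# Example: a pulse that returns after 0.5 microseconds
print(range_from_round_trip(0.5e-6))  # ≈ 74.9 metres
```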
The fast-spinning bob you see on top of most self-driving cars is the LiDAR device. It is by far the most expensive sensor mounted on a self-driving car.
A great video on how LiDAR works: https://www.youtube.com/watch?v=EYbhNSUnIdU
For a more in depth reading, you can visit: https://arstechnica.com/cars/2019/02/the-ars-technica-guide-to-the-lidar-industry/
Cameras
Autonomous vehicles usually have a suite of cameras to provide visibility of the environment around them. Wide-angle cameras provide broad visibility around the car, whereas narrow cameras provide a focused, long-range view of distant objects.
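The wide-vs-narrow trade-off falls out of simple pinhole-camera geometry, as this small sketch shows; the sensor width and focal lengths below are invented for illustration.

```python
import math

# A minimal pinhole-camera sketch: shorter focal lengths widen the
# field of view, longer ones narrow it for long-range detail.
def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(horizontal_fov_deg(6.4, 2.8))   # wide-angle lens: ≈ 98 degrees
print(horizontal_fov_deg(6.4, 25.0))  # narrow lens: ≈ 15 degrees
```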
Radar Sensors
The radar system works in much the same way as LiDAR, the only difference being that it uses radio waves instead of laser light. Since radio waves are absorbed less (owing to their much longer wavelength), radar can detect objects through fog, dust, rain, and snow. However, that same long wavelength means radar detections are far less precise. Even so, radars play an important role in detecting and responding to the motion of objects ahead.
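That motion sensing relies on the Doppler effect. Here is a minimal sketch of the standard monostatic-radar relation v = (Doppler shift × wavelength) / 2; the 77 GHz carrier and 5 kHz shift are illustrative values, not from the article.

```python
# A minimal sketch of recovering relative speed from a radar's
# Doppler shift, assuming a monostatic (co-located) radar.
SPEED_OF_LIGHT_M_S = 299_792_458

def relative_speed_m_s(doppler_shift_hz: float, carrier_hz: float) -> float:
    wavelength_m = SPEED_OF_LIGHT_M_S / carrier_hz
    return doppler_shift_hz * wavelength_m / 2

# Example: a 77 GHz automotive radar observing a 5 kHz Doppler shift
print(relative_speed_m_s(5_000, 77e9))  # ≈ 9.7 m/s (about 35 km/h)
```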
Ultrasonic Sensors
Some cars also use ultrasonic sensors for close-range work. They help detect nearby cars in dense traffic and provide guidance when parking.
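Ultrasonic ranging is time-of-flight again, just with sound instead of light. A minimal sketch of the parking use case follows; the 3 ms echo and the 0.6 m warning threshold are invented for illustration.

```python
# A minimal sketch of ultrasonic parking assistance: distance from
# echo time via the speed of sound, with a close-obstacle warning.
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 °C

def obstacle_distance_m(echo_time_s: float) -> float:
    # The ping travels out and back, so halve the path.
    return SPEED_OF_SOUND_M_S * echo_time_s / 2

distance = obstacle_distance_m(0.003)  # echo returns after 3 ms
print(f"{distance:.2f} m")             # ≈ 0.51 m
if distance < 0.6:
    print("Warning: obstacle very close")
```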
Each of these sensors produces a different type of data.
An average self-driving car with all these sensors captures data at a rate of about 1 GB per second; at that rate, a single hour of driving produces roughly 3.6 TB of raw data. All of it is fed into the perception models, which then detect and track objects around the car.
This is where training data comes into play. The larger and more diverse the dataset the perception models are trained on, the better the vehicle can detect and track objects, and consequently, the better it performs on the road.
Annotation is a critical part of the self-driving cycle. For every one hour driven, it takes approximately 800 human hours to label it.
– Carol Reiley (Cofounder, Drive.ai)
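To put that ratio in perspective, here is a back-of-the-envelope sketch; only the 800:1 ratio comes from the quote, the rest is illustrative.

```python
# The annotation bottleneck implied by the quote above:
# ~800 human hours of labeling per hour of driving.
LABEL_HOURS_PER_DRIVEN_HOUR = 800

def labeling_hours(driven_hours: float) -> float:
    return driven_hours * LABEL_HOURS_PER_DRIVEN_HOUR

# Example: a single 8-hour day of driving
print(labeling_hours(8))  # 6400 human hours of labeling
```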
By: Playment. Credit: BecomingHuman.