Developing a car that can ‘see’ and make decisions better than a human driver is one of the biggest challenges for driverless cars: the vehicle must be able to perceive its surroundings and understand what it sees before determining its next step.
Many companies are trying to build autonomous vehicles and the technology required for them to function. They all approach the engineering challenges differently, but the consensus is that driverless cars require three tools to mimic a human’s ability to see. Information from a variety of sensors is combined to create a detection system capable of ‘seeing’ the car’s environment better than human eyesight can.
Use of Lidar Technology for Autonomous Vehicles
Lidar is often touted as the key technology for driverless cars. The sensor works by firing out rapid pulses of laser light and measuring how long each pulse takes to return; the data it gathers is used to create a detailed and precise map of the surrounding environment.
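To make the idea concrete, here is a minimal Python sketch of the time-of-flight calculation described above (the function name and example timing are illustrative, not taken from any particular sensor’s software):

```python
# Minimal sketch of a lidar time-of-flight range calculation.
# The pulse travels out to the object and back, so the one-way
# distance is half the round trip multiplied by the speed of light.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_return_time(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, in metres."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a pulse that returns after 200 nanoseconds
print(distance_from_return_time(200e-9))  # ~30 m
```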
The technology is well-suited to detecting moving objects and works by creating point clouds that represent the vehicle’s surroundings. It can provide shape and depth information, identifying cars and pedestrians as well as determining the road geography.
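As a rough illustration of how those point clouds are built, each return can be turned into a 3D point once the beam’s angles are known. The sketch below assumes the scanner reports a range plus azimuth and elevation angles, which is a simplification of how real devices work:

```python
import math

def point_from_return(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert one lidar return (spherical coordinates) to an (x, y, z) point."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# Sweeping the beam through a full rotation builds up a point cloud
cloud = [point_from_return(30.0, math.radians(a), 0.0) for a in range(0, 360, 10)]
print(len(cloud), cloud[0])  # 36 points; the first lies 30 m straight ahead
```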
Lidar functions in all light conditions and can detect objects and obstacles that image-classifier algorithms might miss. However, it does not match the resolution of a camera, providing only a general sense of shape rather than specific visuals. The technology is not yet well established and can still struggle in poor weather and with hazards such as potholes. Furthermore, lidar is expensive and its reliability is unproven; several incidents involving driverless cars have recently been reported in the United States.
Lidar also cannot decide on the best course of action by itself; several streams of information are required to determine whether an object in the street is a trash can or a child crossing the road.
Diversity is the key – different sensors can provide different information that allows an autonomous car to make a decision.
Role of Cameras and Radar
Cameras are the most accurate means of creating a visual representation of the world; they are placed on every side of the vehicle to provide a 360° view. They allow driverless cars to see lane lines and road signs in high resolution, in enough detail to tell whether a person on a bike is signaling, for example, and to distinguish details of the surrounding environment. However, cameras do not function well in low-visibility conditions and are poor at judging distance.
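For a flavour of how lane lines can be picked out of a camera image, the sketch below uses a classical edge-detection-plus-Hough-transform pipeline via OpenCV; real autonomous-driving stacks rely on learned models rather than this simple approach, and the input file name here is a hypothetical example:

```python
import cv2
import numpy as np

# Classical lane-line sketch: edge detection followed by a Hough transform.
# Modern perception stacks use learned models; this only illustrates the idea.
frame = cv2.imread("road.jpg")  # hypothetical example image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)  # keep strong intensity edges
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)  # draw candidates
cv2.imwrite("road_lanes.jpg", frame)
```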
Radar can be used to supplement the information from a camera in low-visibility conditions. Found behind the car’s sheet metal, it already underpins technology like adaptive cruise control and automatic emergency braking. It can determine the speed of the objects it sees and how far away they are. However, it is not precise enough to determine ‘what’ it sees and so cannot distinguish between different types of vehicles.
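The speed measurement comes from the Doppler effect: the frequency of the reflected signal shifts in proportion to the object’s relative speed. A simplified Python sketch of that relation follows (real automotive radars use more elaborate FMCW processing; the example figures are illustrative):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def radial_speed_from_doppler(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Relative (radial) speed implied by a measured Doppler frequency shift."""
    return doppler_shift_hz * SPEED_OF_LIGHT / (2.0 * carrier_hz)

# Example: a 77 GHz automotive radar measuring a ~5.13 kHz shift
print(radial_speed_from_doppler(5_133.0, 77e9))  # ~10 m/s closing speed
```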
Limitations of Cameras and Radar
While cameras and radar can provide a sufficient level of autonomy, they do not cover all scenarios – which is where lidar comes in. The three sensors combine their information to give the car visuals of its surroundings along with the speed, distance, and 3D shape of nearby objects, helping the car’s computer identify what’s what.
Autonomous vehicles need to make sense of this constant flow of information, much like the brain has to make sense of all the visual data taken in by the eyes. They use a process called sensor fusion: the input is fed into a high-performance, centralized artificial intelligence computer, which combines all the relevant data streams so the car can make driving decisions. As a result, autonomous vehicles have no blind spots; they are constantly receiving information and are always aware of the moving, changing world around them.
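A toy Python sketch of the fusion idea, with each sensor contributing the attribute it measures best (the data structures and names here are illustrative, not any manufacturer’s actual pipeline):

```python
from dataclasses import dataclass

# Toy sensor fusion: each sensor contributes the attribute it measures best,
# and a central step merges them into a single object description.

@dataclass
class LidarDetection:
    distance_m: float   # lidar: precise range and shape

@dataclass
class RadarDetection:
    speed_mps: float    # radar: relative speed, robust in low visibility

@dataclass
class CameraDetection:
    label: str          # camera: what the object actually is

@dataclass
class FusedObject:
    label: str
    distance_m: float
    speed_mps: float

def fuse(lidar: LidarDetection, radar: RadarDetection,
         camera: CameraDetection) -> FusedObject:
    """Combine complementary readings into one picture of the object."""
    return FusedObject(camera.label, lidar.distance_m, radar.speed_mps)

obj = fuse(LidarDetection(28.5), RadarDetection(-3.2), CameraDetection("cyclist"))
print(obj)  # FusedObject(label='cyclist', distance_m=28.5, speed_mps=-3.2)
```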
Conclusion
So, while lidar technology might be considered key to autonomous vehicles, it can’t make decisions alone – it requires other technology like cameras and radar to create a complete picture of the surroundings. Once the car knows how to see, decision-making becomes easier – just don’t hit anything.