Radar could help self-driving cars ‘see’ clearly in foggy conditions
Engineers in the US have developed a way to improve the imaging capability of existing radar sensors – a system that could enable self-driving cars to ‘see’ what’s up ahead regardless of the weather.
LiDAR, which works by bouncing laser beams off surrounding objects, can paint a high-resolution 3D picture on a clear day, but it cannot see in fog, dust, rain or snow. On the other hand, radar, which transmits radio waves, can see in all weather, but it only captures a partial picture of the road scene.
New technology developed at the University of California (UC) San Diego improves radar imaging so that the sensors can accurately predict the shape and size of objects in the scene, according to the team behind the system.
“It’s a LiDAR-like radar,” said Dinesh Bharadia, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering. It’s an inexpensive approach to achieving bad weather perception in self-driving cars, he noted. “Fusing LiDAR and radar can also be done with our techniques, but radars are cheap. This way, we don’t need to use expensive LiDARs.”
The system consists of two radar sensors placed on the bonnet and spaced an average car’s width apart (1.5m). According to the researchers, arranging two radar sensors in this way is key, because it enables the system to see more space and detail than a single radar sensor can.
During test drives on clear days and nights, the system performed as well as a LiDAR sensor at determining the dimensions of cars moving in traffic. The researchers said its performance did not change in tests simulating foggy weather. The team ‘hid’ another vehicle using a fog machine and their system accurately predicted its 3D geometry. The LiDAR sensor essentially failed the test.
Radar traditionally suffers from poor imaging quality because, when radio waves are transmitted and bounced off objects, only a small fraction of the signal is ever reflected back to the sensor. As a result, vehicles, pedestrians and other objects appear as a sparse set of points, experts have said.
“This is the problem with using a single radar for imaging. It receives just a few points to represent the scene, so the perception is poor. There can be other cars in the environment that you don’t see,” said Kshitiz Bansal, a computer science and engineering PhD student at UC San Diego. “So if a single radar is causing this blindness, a multi-radar setup will improve perception by increasing the number of points that are reflected back.”
The team found that spacing two radar sensors 1.5m apart on the bonnet of the car was the optimal arrangement. “By having two radars at different vantage points with an overlapping field of view, we create a region of high-resolution, with a high probability of detecting the objects that are present,” Bansal explained.
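The core idea can be illustrated with a short sketch: each radar reports a sparse set of detections in its own coordinate frame, and transforming both into a shared vehicle frame produces a denser combined point cloud in the region where their fields of view overlap. The Python snippet below is a minimal illustration, not the team’s code; the sensor names, mounting offsets and example detections are assumptions made for clarity.

```python
# Minimal sketch (not the researchers' actual pipeline) of fusing detections
# from two bonnet-mounted radars, assumed to be 1.5m apart, into one
# vehicle-frame point cloud. All names and values here are illustrative.

import numpy as np

# Assumed mounting offsets (x forward, y left) in metres from the vehicle centre.
RADAR_OFFSETS = {
    "left_radar": np.array([2.0, 0.75]),
    "right_radar": np.array([2.0, -0.75]),
}

def to_vehicle_frame(points_xy: np.ndarray, sensor: str) -> np.ndarray:
    """Translate one radar's (x, y) detections into the shared vehicle frame.

    A real system would also apply a calibrated rotation; here both radars
    are assumed to be mounted parallel to the vehicle axis.
    """
    return points_xy + RADAR_OFFSETS[sensor]

def fuse(detections: dict[str, np.ndarray]) -> np.ndarray:
    """Stack the transformed detections from all radars into one point cloud."""
    clouds = [to_vehicle_frame(pts, name) for name, pts in detections.items()]
    return np.vstack(clouds)

# Example: each radar alone returns only a few points on a car ahead,
# but the fused cloud covers the object from two vantage points.
detections = {
    "left_radar": np.array([[18.2, -1.1], [18.5, -0.4]]),
    "right_radar": np.array([[18.3, 0.6], [18.6, 1.2], [18.4, 0.1]]),
}
print(fuse(detections))  # five vehicle-frame points instead of two or three
```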
“There are currently no publicly available datasets with this kind of data, from multiple radars with an overlapping field of view,” Bharadia said. “We collected our own data and built our own dataset for training our algorithms and for testing.”
The dataset consists of 54,000 radar frames of driving scenes during the day and night in live traffic, and in simulated fog conditions. Future work will include collecting more data in the rain. To do this, the team will first need to build better protective covers for their hardware.
The team is now working with Toyota to fuse the new radar technology with cameras. The researchers say this could potentially replace LiDAR. “Radar alone cannot tell us the colour, make, or model of a car. These features are also important for improving perception in self-driving cars,” Bharadia said.