How Machine Learning Improves Perception

Posted By
Rick Searcy
Advanced Radar Systems Manager

As vehicles become more automated, developers can use machine learning to train perception systems to identify objects and better understand their environment with less data.

Machine learning is a subset of artificial intelligence that refers to a system’s ability to be trained through experience with different scenarios.

One challenge machine learning helps address with radar is edge detection. Radar's relatively long wavelengths produce lower-resolution measurements that can leave scattering surfaces on objects under-resolved, making it difficult to tell where an object's edges are and, in turn, to interpret the data and resolve the scene. Engineers are working on ways to improve radar resolution, such as moving up from the 77 GHz frequency common in today's automotive applications to 120 GHz or higher, with a corresponding reduction in wavelength. That allows a much higher-resolution measurement from a sensor of the same size.
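The frequency argument can be made concrete with a back-of-the-envelope calculation. Wavelength is the speed of light divided by the carrier frequency, and the angular resolution of a fixed-size aperture scales roughly with wavelength over aperture size, so raising the frequency shrinks the wavelength and sharpens the measurement for the same sensor. A quick sketch (the scaling rule is a standard approximation, not a statement about any particular sensor):

```python
# Back-of-the-envelope: wavelength and relative angular resolution
# for 77 GHz vs. 120 GHz radar, assuming the same sensor aperture.
C = 299_792_458.0  # speed of light, m/s

def wavelength_mm(freq_ghz: float) -> float:
    """Free-space wavelength in millimetres for a carrier frequency in GHz."""
    return C / (freq_ghz * 1e9) * 1e3

lam_77 = wavelength_mm(77.0)    # ~3.89 mm
lam_120 = wavelength_mm(120.0)  # ~2.50 mm

# Angular resolution for a fixed aperture scales roughly as wavelength / D,
# so the improvement factor at the same sensor size is the frequency ratio.
improvement = lam_77 / lam_120  # ~1.56x finer angular resolution
print(f"77 GHz:  {lam_77:.2f} mm")
print(f"120 GHz: {lam_120:.2f} mm")
print(f"Resolution improvement at same aperture: {improvement:.2f}x")
```

Going from 77 GHz to 120 GHz thus buys roughly a 1.5x finer angular resolution without changing the sensor's footprint.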

Even with today’s radars, however, machine learning can help to characterize different scenarios when the data is difficult to describe through standard algorithms.

Developers can present many examples of objects in a particular category to a machine learning system, and it can learn how signals are scattered by complex objects with many reflection points. It can take advantage of contextual information. And it can even learn from simultaneous data provided by cameras, lidars or HD maps to classify objects based on radar signals.
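One way to picture that cross-sensor learning is a toy sketch in which a time-aligned camera supplies class labels for radar returns, and a simple classifier is then fit on the radar features alone. Everything here is hypothetical (the feature names, the numbers, and the nearest-centroid model stand in for a real pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each row is a radar feature vector
# (e.g. cross-section, range-rate spread, extent). The labels come
# from a time-aligned camera that has already classified each object.
n = 200
pedestrians = rng.normal([0.5, 1.5, 0.4], 0.2, size=(n, 3))
vehicles = rng.normal([8.0, 10.0, 3.5], 1.0, size=(n, 3))
X = np.vstack([pedestrians, vehicles])
y = np.array([0] * n + [1] * n)  # 0 = pedestrian, 1 = vehicle (camera labels)

# Nearest-centroid classifier: the system "learns" each class's
# typical radar signature from the camera-supervised examples.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(features: np.ndarray) -> int:
    """Assign a radar feature vector to the nearest learned class centroid."""
    return int(np.argmin(np.linalg.norm(centroids - features, axis=1)))

# At runtime the camera is no longer needed: radar features alone suffice.
print(classify(np.array([0.6, 1.4, 0.5])))  # 0 (pedestrian-like)
print(classify(np.array([7.5, 9.0, 3.0])))  # 1 (vehicle-like)
```

The point of the sketch is the supervision pattern: another sensor does the labeling during training, so the radar-only classifier can run on its own in deployment.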

Further benefits are possible if we use machine learning judiciously. Instead of taking a brute-force approach and applying machine learning to all of the raw data provided by a radar, we can do some classical preprocessing first and then apply machine learning only to the portions of the data where it adds the most value.

Many automotive radars use an array of antennas to measure angle. In classical radar signal processing, the digitized signals from each antenna are converted to range and speed, and the signals are then compared across the antenna array to make angle measurements. An example of preprocessing would be to use classical signal processing to isolate regions of interest, focusing on objects at particular ranges and speeds. The signals from each antenna that share a common range and speed can then be used to train a system.
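A minimal sketch of that preprocessing, with made-up array sizes: a 2-D FFT per antenna converts the raw samples into a range-Doppler map, the strongest cell picks out a region of interest (a real radar would use a detector such as CFAR here), and the per-antenna complex values at that cell form the compact feature vector a learning system could be trained on.

```python
import numpy as np

rng = np.random.default_rng(1)

n_antennas, n_chirps, n_samples = 4, 64, 128  # illustrative dimensions

# Simulated raw ADC cube: one target at range bin 30, Doppler bin 10,
# with a phase progression across the antenna array (its angle signature).
target_phase = np.exp(1j * 0.8 * np.arange(n_antennas))
fast = np.exp(2j * np.pi * 30 * np.arange(n_samples) / n_samples)
slow = np.exp(2j * np.pi * 10 * np.arange(n_chirps) / n_chirps)
cube = (target_phase[:, None, None] * slow[None, :, None] * fast[None, None, :]
        + 0.1 * rng.standard_normal((n_antennas, n_chirps, n_samples)))

# Classical step: per-antenna 2-D FFT -> range-Doppler maps.
rd_maps = np.fft.fft2(cube, axes=(1, 2))

# Isolate the region of interest: the strongest range-Doppler cell.
power = np.abs(rd_maps).sum(axis=0)
doppler_bin, range_bin = np.unravel_index(np.argmax(power), power.shape)

# The cross-antenna snapshot at that cell carries the angle information,
# and is the compact input a learning system could be trained on.
snapshot = rd_maps[:, doppler_bin, range_bin]
print(doppler_bin, range_bin)  # 10 30
print(snapshot.shape)          # (4,)
```

Note how little data survives the classical stage: a three-dimensional cube of raw samples is reduced to one short complex vector per detected object before any learning happens.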

This kind of analysis gives the system a rich basis of information to feed into a neural network, which in turn can apply machine learning to produce an even clearer picture of the scene.

Without this interim step, an AI system would have to determine the scene from the raw digitized signals themselves in real time. It would need to be extremely powerful, and therefore more expensive and resource-intensive, and it would require long training sequences to figure out what to make of the data. Such a system would also be difficult to troubleshoot: if the vehicle detected an object that was not there, for example, it could be hard to pinpoint where the processing went wrong. Combining classical processing with machine learning keeps the two processing paths partially independent, which increases the robustness of the system.


The data provided by a radar is more complex than what comes in from vision systems, providing range and range rate in addition to the location of objects, but it is also quite valuable, and it is well worth the effort to intelligently sift through it to extract meaning. Aptiv's 20-year history of working with automotive radar – we were the first to put a radar in a Jaguar in 1999 to enable adaptive cruise control – has given us the expertise needed to pull out the relevant data in the most efficient way.



