Sensors Expo 2018: Lidar Critical For ADAS & AVs

Accurate LiDAR classification and segmentation are required for developing critical ADAS and autonomous-vehicle components, chiefly high-definition mapping and perception and path/motion-planning algorithms. This article covers best practices for accurately annotating and benchmarking AV/ADAS models against LiDAR ground-truth training data.

Autonomous vehicle (AV) developers need vast amounts of accurately labeled images and point-cloud scenes to train their artificial-intelligence systems. Point clouds are generated by LiDAR, which stands for Light Detection and Ranging: pulsed laser light is used to construct 3D representations of objects and terrain. Interest in LiDAR has grown significantly in recent years, especially for generating high-definition maps that can semantically specify static objects to roughly 5-cm accuracy.
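The ranging principle behind those point clouds is simple time-of-flight: a pulse's round-trip time t maps to a one-way distance d = c·t/2. A minimal sketch (the helper name and sample times are illustrative, not from any vendor API):

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def tof_to_range(return_time_s):
    """Convert a pulsed-laser round-trip time (seconds) to a one-way
    distance in meters: d = c * t / 2."""
    return C * np.asarray(return_time_s) / 2.0

# A scan is typically stored as an N x 3 array of (x, y, z) points,
# often with per-point intensity as a fourth column.
times = np.array([2.0e-7, 6.67e-7])  # round-trip times in seconds
ranges = tof_to_range(times)         # roughly 30 m and 100 m
```

At these scales, timing precision dominates: a 5-cm range error corresponds to only about a third of a nanosecond of round-trip time.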

To build advanced AI models, AV scientists are logging millions of miles in experimental cars. But once those vehicles collect camera feeds and point clouds to build high-resolution maps, trained humans must painstakingly label those images and point clouds by tagging every object. “For every hour driven, it’s approximately 800 human hours to label,” Carol Reiley, board member of the AV technology startup Drive.ai, said last year. Human grunt work is no substitute for an elegant solution from a hyper-focused modular supplier.

This challenge is fundamentally different from any that exists in the traditional automotive industry, so there is no existing supplier base to solve it. That spells an opportunity for startups such as Deepen AI, which is using artificial intelligence to quickly and accurately label point-cloud scenes and images that are currently tagged by hand. Deepen AI has no ambition of becoming a full-stack AV developer. Instead, it is building tools to accelerate those developers' work, much as Microsoft's Visual Studio helps programmers build and debug software. In other words, Deepen AI is positioning itself for the AV industry's modular future.

In his talk, Deepen AI founder and CEO Mohammad Musa will discuss best practices for accurate LiDAR data classification and segmentation. Humans face many difficulties in understanding and working with LiDAR data: most of the time, data processors lack a companion camera feed to help them interpret the point cloud, which leaves a lot of room for guesswork and mistakes. Drawing on Deepen AI's years of experience with data sets from leading LiDAR vendors, Tier 1 suppliers, and OEMs, Musa will share how the company has helped these customers increase the safety of their autonomous systems by better utilizing LiDAR training data.
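Benchmarking annotations against ground truth, as the article describes, amounts to comparing per-point class labels while skipping unlabeled points. A minimal sketch, assuming a hypothetical label scheme and metric helper (not Deepen AI's actual tooling or API):

```python
import numpy as np

# Hypothetical per-point semantic classes for a LiDAR scan.
CLASSES = {0: "unlabeled", 1: "road", 2: "vehicle", 3: "pedestrian"}

def segmentation_accuracy(pred, truth, ignore=0):
    """Fraction of labeled points whose predicted class matches ground
    truth; points marked `ignore` in the ground truth are skipped."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    mask = truth != ignore
    return float((pred[mask] == truth[mask]).mean())

pred  = np.array([1, 1, 2, 3, 0, 2])  # one class ID per point
truth = np.array([1, 1, 2, 2, 0, 2])
acc = segmentation_accuracy(pred, truth)  # 4 of 5 labeled points match
```

Production pipelines typically report per-class intersection-over-union rather than plain accuracy, since rare classes such as pedestrians would otherwise be drowned out by road and background points.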

"To realize the great benefits of autonomy, we need to resolve all the bottlenecks preventing us from increasing the safety and reliability of autonomous systems much faster. At the current rate of development, we are wasting too many human cycles and unnecessary costs while still risking lives during the testing process. This is where Deepen AI is focusing on accelerating the autonomous system development process while helping customers ensure a high safety and reliability bar. That can only be done by using very smart tools that are specialized and dedicated for this problem domain" - Mohammad Musa

Mohammad Musa will present "LiDAR Training Data Best Practices" on Tuesday, June 26, in Meeting Room 211AD.