Three Pillars Enable Autonomous Drive

Sensors Insights by Will Tu

The autonomous vehicle movement has been steadily gaining market momentum. Sense technologies are playing an integral role in enabling Advanced Driver Assistance System (ADAS) features, which are becoming the building blocks of autonomous driving capabilities.

As sensors like lidar, radar, and cameras become more cost effective, vehicle OEMs are offering more ADAS features in the market. Driver assistance features are just the beginning. Vehicle OEMs have pledged huge investments and triggered M&A activity centered around the autonomous vehicle movement. Clearly, automotive electronics have shifted away from the traditional real-time command-and-control functions that utilize an array of low-cost, resource-limited microcontrollers.

Now it is all about high-performance computing platforms that use heterogeneous-compute CPUs, GPUs, DSPs, and ISPs to fuse data from an array of sense elements such as radar, lidar, ultrasonic sensors, and cameras, along with external sources, including satellites, to create vehicle-to-vehicle and vehicle-to-infrastructure connections.

There are three key technology pillars supporting the enablement of autonomous driving: data, processing, and software.

Figure 1: The three supporting pillars of autonomous driving.

Everything starts with data. Several car manufacturers have focused heavily on vision-enabled systems, leveraging the cost effectiveness of image sensors driven in part by the consumer electronics market. These sensors use machine vision to provide 50 to 60 frames per second, closely approximating real-time performance.

Many manufacturers, however, are growing uncomfortable with depending solely on vision systems because of problems associated with factors such as dirt and mud, heavy rain or snow, glaring sun, and reflections off water or pavement. Hence the addition of other sense elements such as radar or lidar. Even these technologies have their negatives, chiefly cost and the fact that they cannot read signage. This drives the addition of other data, such as maps or data from infrastructure and other vehicles.

It makes sense to add more sensors and fuse their outputs together with other external data and with maps. The benefits are a greater level of contextual awareness of the vehicle's environment and an extended informational horizon, as a vehicle can now see beyond itself through the “vision” of other cars or infrastructure.
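As a rough illustration of what fusing sensor readings can mean in practice, the sketch below combines range estimates from two sensors by inverse-variance weighting, a common building block of sensor fusion. The sensor names, readings, and variances here are hypothetical, chosen only to show the idea, not drawn from any real system.

```python
# Minimal sketch: fuse range estimates from multiple sensors by
# inverse-variance weighting. All numbers below are illustrative.

def fuse_estimates(readings):
    """Combine (value, variance) pairs into a single estimate.

    Each reading is weighted by 1/variance, so the more trustworthy
    sensor (lower variance) dominates the fused result.
    """
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    fused_value = sum(w * val for w, (val, _) in zip(weights, readings)) / total
    fused_variance = 1.0 / total
    return fused_value, fused_variance

# Camera estimates the lead vehicle at 42 m but degrades in glare
# (high variance); radar reads 40 m with a much tighter variance.
camera = (42.0, 4.0)   # (meters, variance)
radar = (40.0, 1.0)

value, variance = fuse_estimates([camera, radar])
print(f"fused range: {value:.1f} m, variance: {variance:.2f}")
```

The fused estimate lands closer to the radar reading, and its variance is lower than either sensor's alone, which is the payoff of fusion: the combined estimate is better than any single source.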

The cost of data through additional sensors and V2X capabilities is not trivial; it represents a significant step function in the cost of future vehicles. Costs also increase when one attempts to process all that data. Semiconductor makers are challenged to offer the most processing power while dealing with the thermal issues associated with the high clock rates at which these processors run.

When it comes to pure performance, semiconductor companies have different engines from which to select: the Central Processing Unit (CPU), Graphics Processing Unit (GPU), Digital Signal Processor (DSP), or Image Signal Processor (ISP). Today, the semiconductor industry is rushing to create complex heterogeneous-compute platforms to help meet the processing demand. ARM, Cadence, Ceva, Intel, MediaTek, Nvidia, NXP, Qualcomm, Renesas, Texas Instruments, Xilinx, and others are vying for a piece of the autonomous drive market.

Besides the BOM costs of sensors, processing, and communications, there is the hidden cost of software development, plus the associated tools and techniques that go into it. Machine learning software is a challenge that has given rise to several start-ups. Traditional means of developing command-and-control code are not sufficient for machine learning. Compounding this, leveraging the “cloud” for AI workloads is not realistic with today's technology because of latency issues.
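The latency concern can be made concrete with back-of-the-envelope arithmetic. The numbers below (a 60 fps camera pipeline and a 100 ms cloud round trip) are illustrative assumptions, not measurements:

```python
# How far does a vehicle travel while waiting on a cloud round trip,
# compared with the per-frame budget of its own camera pipeline?
# The 100 ms round-trip latency is an assumed, illustrative figure.

speed_kmh = 110.0                      # highway speed
speed_ms = speed_kmh * 1000 / 3600     # about 30.6 m/s

frame_budget_s = 1.0 / 60              # ~16.7 ms per frame at 60 fps
cloud_round_trip_s = 0.100             # assumed cloud round-trip latency

blind_distance_m = speed_ms * cloud_round_trip_s

print(f"Per-frame budget: {frame_budget_s * 1000:.1f} ms")
print(f"Distance traveled during round trip: {blind_distance_m:.1f} m")
```

Under these assumptions the vehicle covers roughly three meters during a single round trip, several camera frames' worth of motion, which is why perception and decision-making must run on the vehicle itself rather than in the cloud.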

Despite all the challenges, the industry is plowing full steam ahead. These topics will be explored in more depth at the Sensors Expo 2017 automotive workshop in a series of panel discussions on:

  1. Processing, CPUs & Analysis
  2. Machine Learning and Software
  3. Sense Technologies

Figure 2

To register, click here and use discount code DSP100 to get $100 off or a free expo pass.

 

About the Author

Willard Tu is Vice President of Sales and Marketing for DSP Concepts. He is responsible for the planning, development, and implementation of the company's marketing strategies, marketing communications, public relations activities, and business development efforts. Most recently, he served as Director of Embedded Segment Marketing for ARM, a leading semiconductor IP company. At ARM he was a technology evangelist and market maker who expanded ARM's market share in deeply embedded applications by working with a wide range of third-party IP companies and OEM designers.