Autonomous Cars, LiDAR, and Artificial Intelligence, Oh, My! A Look at CES 2017 – Product Design & Development

March 5, 2017

The Consumer Electronics Show, one of the world’s tentpole technology events, is a flashy vehicle for the engineering underneath. In the world of chipsets, the show was dominated by Nvidia, one of many companies seeking to power the artificial intelligence in next-generation cars and image processors. LG also wowed with its 4K displays, including the “Wallpaper” TV screen. Many component makers pushed their products for use in display screens, whether those screens were headed for the living room or an SUV. There were also plenty of opportunities for component manufacturers to find places in the flashy new cars and fleets that dominated parts of the show floor.

Chipmakers and sensor manufacturers are also keeping an eye on what customers want in the automotive space. Just seven states (Nevada, California, Florida, Michigan, Hawaii, Washington, and Tennessee) and the District of Columbia have passed bills related to autonomous driving. Although these laws determine whether or not an autonomous car can operate in the state, many details are only sketched in, including the capabilities required of the sensors. Manufacturers in this space have to contend with customers’ changing needs while a standard is worked out.

Here are some of the standout companies featured at CES: 


Velodyne’s LiDAR sensors are considered top of the line when it comes to autonomous cars. The company’s HDL-32E LiDAR offers an industry-leading 360-degree field of view, while the VLP-16 Puck is designed for mass production of 360-degree sensors in a smaller, $8,000 package. Velodyne is working on lowering that price, but the sensor is already being snapped up by automakers. At CES, Ford announced that its Fusion Hybrid concept car will carry two of them on the A-pillars. The Puck offers a range of 100 meters and a ±15° vertical field of view. While it’s expensive, its small size, at just 830 grams, makes it attractive.
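A sensor like the Puck reports each laser return as a range plus two angles (azimuth around the horizontal sweep, elevation within that ±15° vertical fan), and downstream software converts those readings into Cartesian points. As a rough sketch of that conversion — the axis convention and function name here are illustrative, not Velodyne’s actual data format:

```python
import math

def polar_to_cartesian(distance_m, azimuth_deg, elevation_deg):
    """Convert one LiDAR return (range + angles) into an (x, y, z) point.

    Convention assumed here: azimuth sweeps the horizontal plane
    (0 degrees = straight ahead along +y), and elevation is measured
    up from that plane (roughly -15 to +15 degrees for a VLP-16).
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.sin(az)  # right of the sensor
    y = distance_m * math.cos(el) * math.cos(az)  # forward
    z = distance_m * math.sin(el)                 # up
    return x, y, z

# A level return straight ahead at 10 m lands at (0, 10, 0):
print(polar_to_cartesian(10.0, 0.0, 0.0))
```

Repeating this for every return in a full 360-degree sweep is what produces the familiar ring-shaped point clouds these sensors generate.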


Osram is another company getting in on the ground floor with autonomous vehicle sensors. Focused on lamps and lighting products, Osram also makes a four-channel laser for LiDAR systems. It has partnered with laser scanning company Innoluce, now part of Infineon, to demonstrate how very short laser pulses can be used to create a high-resolution map of the car’s environment. A short pulse length (less than 5 nanoseconds) means the laser light produces a high-resolution image while also meeting eye-safety standards. This is the first time a LiDAR sensor will be based on micro-electromechanical systems (MEMS) instead of mechanical beam steering. The controller and MEMS mirror will be packaged together, but this system will not be available from Innoluce until about 2020. Osram showed a sample laser package at CES and used virtual reality to give visitors a tour of its other offerings in the lighting space.
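Underlying all of these systems, LiDAR ranging comes down to timing a light pulse’s round trip: distance = (speed of light × round-trip time) / 2. A back-of-the-envelope sketch of that arithmetic (the numbers are illustrative, not from any vendor’s datasheet):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_time_of_flight(round_trip_s):
    """Distance to a target given the pulse's round-trip time (out and back)."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A target 100 m away returns the pulse in roughly 667 nanoseconds:
t = 2 * 100.0 / SPEED_OF_LIGHT
print(f"{t * 1e9:.0f} ns round trip -> {range_from_time_of_flight(t):.1f} m")
```

This also hints at why pulse length matters: light travels about 1.5 meters during a 5-nanosecond pulse, so shorter pulses let the sensor separate closely spaced returns more finely while keeping the total energy per pulse within eye-safe limits.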


Nvidia also made a lot of noise in the autonomous car space at CES. Along with demonstrating consumer-oriented products, the graphics processing unit maker also placed itself at the core of autonomous vehicles. Nvidia has partnered with Mercedes-Benz and Audi to show its DRIVE PX 2 AI self-driving computer, which uses DriveWorks software running a neural net to negotiate unexpected hazards and unfamiliar territory. The star of the show was “BB8,” a Lincoln MKZ that responds to spoken commands and was demonstrated outside the convention center. Inside, Nvidia used an Audi Q7 to demonstrate more AI-piloted driving.

The DRIVE PX 2 computing platform offers deep neural networks, location services, and two discrete GPUs, all while consuming just 10 watts of power in certain single-processor configurations. It’s also being used in the Tesla Model S. Automotive technology companies such as LG, ON Semiconductor, and TomTom all use DRIVE PX 2 in a variety of capacities, including ON’s cameras and LG’s cockpit displays. These deep neural networks can be used both in the car itself, where they process information from the cameras and sensors, and inside a data center, from which information can be sent to the car.

Nvidia’s Tesla GPUs are designed to handle the data center side of the operation. These GPU accelerators decrease deep learning training time and enable a data center to handle more information, which matters when deep learning networks may be performing 24 trillion operations per second.