Processor Drives On-Device AI Applications

Cadence Design Systems’ Tensilica DNA 100 Processor IP is claimed to be the first deep neural-network accelerator (DNA) AI processor IP to deliver both high performance and power efficiency across a full compute range, from 0.5 TeraMAC (TMAC) to 100s of TMACs. That range makes the IP suitable for on-device neural-network inference applications spanning autonomous vehicles (AVs), ADAS, surveillance, robotics, drones, augmented reality (AR)/virtual reality (VR), smartphones, smart home, and IoT. According to Cadence, the DNA 100 processor delivers up to 4.7X better performance and up to 2.3X more performance per watt than competing solutions with similar multiplier-accumulator (MAC) array sizes.

Neural networks exhibit inherent sparsity in both weights and activations, so MACs in other processors are consumed unnecessarily loading and multiplying zeros. The DNA 100 processor’s specialized hardware compute engine eliminates both of these tasks, turning that sparsity into power savings and reduced compute. Retraining a neural network can increase its sparsity and extract maximum performance from the DNA 100 processor’s sparse compute engine. This enables the DNA 100 processor to maximize throughput with a smaller array: in a 4K MAC configuration, Cadence estimates up to 2,550 frames per second (fps) and up to 3.4 TMACs/W (in 16 nm) of on-device inference performance on ResNet 50.
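To make the sparsity idea concrete, here is a minimal Python sketch (not Cadence’s implementation) of a dot product that skips zero operands, the way a sparse compute engine avoids loading and multiplying zeros. The function and counts are illustrative only.

```python
def sparse_dot(weights, activations):
    """Dot product that skips zero operands, as a sparse engine would.

    Returns (result, macs_performed). A dense engine always performs
    len(weights) MACs; skipping zeros avoids both the load and the multiply.
    Illustrative sketch only -- not Cadence's hardware algorithm.
    """
    result = 0.0
    macs = 0
    for w, a in zip(weights, activations):
        if w == 0.0 or a == 0.0:
            continue  # zero operand: no load, no multiply-accumulate
        result += w * a
        macs += 1
    return result, macs

# Pruned (sparse) weights and ReLU-zeroed activations, made up for the example
w = [0.0, 0.5, 0.0, -1.2, 0.0, 0.3]
a = [1.0, 0.0, 2.0,  4.0, 3.0, 1.0]
res, macs = sparse_dot(w, a)
# A dense engine would perform 6 MACs here; only 2 pairs are both nonzero.
```

Retraining for higher sparsity, as the article describes, increases the fraction of zero operands and thus the number of skipped MACs.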

The DNA 100 processor comes equipped with a complete AI software platform. Compatibility with the latest version of the Tensilica Neural Network Compiler brings support for advanced AI frameworks, including Caffe, TensorFlow, and TensorFlow Lite, and for a broad spectrum of neural networks, including convolutional and recurrent networks.

The processor can run all neural network layers, including convolution, fully connected, LSTM, LRN, and pooling. A single DNA 100 processor can easily scale from 0.5 to 12 effective TMACs, and multiple DNA 100s can be stacked to achieve 100s of TMACs for use in compute-intensive on-device neural network applications.
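The scaling figures above can be sanity-checked with back-of-envelope arithmetic. The sketch below is an assumption-laden estimate, not a Cadence specification: the 1 GHz clock and ~3x sparsity speedup are hypothetical values chosen to show how a 4K-MAC array could land in the quoted 0.5–12 effective-TMAC range.

```python
def effective_tmacs(num_macs, clock_ghz, sparsity_speedup):
    """Back-of-envelope effective throughput in TMAC/s.

    Raw throughput is MAC count x clock rate; skipping zero operands
    multiplies the useful work per cycle. All inputs here are
    illustrative assumptions, not published DNA 100 figures.
    """
    raw_tmacs = num_macs * clock_ghz / 1000.0  # 4096 MACs at 1 GHz -> ~4.1 raw TMAC/s
    return raw_tmacs * sparsity_speedup

# Hypothetical 4K-MAC array at 1 GHz with ~3x speedup from sparsity
estimate = effective_tmacs(4096, 1.0, 3.0)  # roughly 12 effective TMAC/s
```

Stacking multiple such instances, as the article describes, is how configurations reach 100s of effective TMACs.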

The DNA 100 processor also incorporates a Tensilica DSP to handle any new neural-network layer not yet supported by its hardware engines, while offering the extensibility and programmability of a Tensilica Xtensa core through Tensilica Instruction Extension (TIE) instructions. Because the DNA 100 processor includes its own direct memory access (DMA) engine, it can also run other control code without the need for a separate controller.

The DNA 100 processor will be available to select customers in December 2018, with general availability expected in the first quarter of 2019. For more details, see the DNA 100 datasheet.