"Fast Inference in Low Power Systems via CEVA's Deep Neural Network Solution," a Presentation from CEVA

Yair Siegel, Director of Strategic Marketing at CEVA, presents the "Fast Inference in Low Power Systems via CEVA’s Deep Neural Network Solution" tutorial at the May 2017 Embedded Vision Summit.

The emergence of state-of-the-art, real-time object detection based solely on convolutional neural networks has created new and complex challenges for embedded systems. Algorithms such as Faster R-CNN, YOLO and SSD are performance-intensive and require high data bandwidth, yet embedded implementations face extreme resource limitations. Various processors and hardware accelerators offer potential solutions, but they entail highly complex software development, and achieving optimal performance with them requires an acute understanding of how to distribute the workload across the various processing units.
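The workload-distribution problem described above can be illustrated with a toy sketch. Everything here is invented for illustration (layer names, cost labels, the greedy heuristic); it is not CEVA's tooling or API, merely one simple way to reason about assigning layers to heterogeneous units:

```python
# Hypothetical example: partition CNN layers across a host CPU and a
# vector accelerator based on each layer's dominant cost. All names and
# classifications are invented; real tools use detailed performance models.

LAYERS = [
    # (layer name, dominant cost)
    ("conv1", "compute"),
    ("pool1", "bandwidth"),
    ("conv2", "compute"),
    ("fc1", "bandwidth"),
]

def partition(layers):
    """Greedy heuristic: compute-bound layers go to the accelerator,
    bandwidth-bound layers stay on the CPU close to memory."""
    return {
        name: ("accelerator" if cost == "compute" else "cpu")
        for name, cost in layers
    }

if __name__ == "__main__":
    for layer, unit in partition(LAYERS).items():
        print(f"{layer} -> {unit}")
```

In practice the mapping is far more subtle (data movement between units often dominates), which is why the talk emphasizes automated tools over hand partitioning.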

This presentation outlines the challenges of implementing high-precision, advanced neural networks for embedded vision, and explains how CEVA's automatic software tools, combined with a mix of processing units, can achieve a power-efficient, flexible solution with very fast time-to-market for inference in low-cost production systems.