Using High-Level Synthesis to Bridge the Gap Between Deep Learning Frameworks and Custom Hardware Accelerators

Wednesday, May 22, 2:45 PM - 3:15 PM
Summit Track: Enabling Technologies
Location: Exhibit Hall ET 2

Recent years have seen an explosion in machine learning and AI algorithms, with a corresponding need for custom hardware to achieve the best performance and power efficiency. However, a wide gap remains between algorithm creation and experimentation (using deep learning frameworks such as TensorFlow and Caffe) and custom hardware implementations in FPGAs or ASICs. High-level synthesis (HLS) using standard C++ as the design language can provide an automated path to custom hardware implementations by leveraging existing APIs available in deep learning frameworks (e.g., the TensorFlow operator C++ API). Using these APIs, designers can plug their synthesizable C++ hardware models directly into a deep learning framework to validate a given implementation. Designing with C++ and HLS not only makes it possible to quickly create AI hardware accelerators with the best power, performance, and area (PPA) for a target application, but also helps bridge the gap between software algorithms developed in deep learning frameworks and their corresponding hardware implementations.
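As a rough illustration of the flow described above (not code from the talk), the sketch below registers a custom TensorFlow C++ op whose Compute method calls a plain, loop-based C++ function. The op name HlsRelu and the helper hls_relu are hypothetical; the idea is that the same C++ body could be handed to an HLS tool such as Catapult, so results observed inside the framework exercise the model that eventually becomes hardware.

    // Minimal sketch (assumed names): a synthesizable-style C++ ReLU model
    // wrapped as a TensorFlow custom op for framework-level validation.
    #include "tensorflow/core/framework/op.h"
    #include "tensorflow/core/framework/op_kernel.h"
    #include "tensorflow/core/framework/shape_inference.h"

    using namespace tensorflow;

    // Plain C++ loop that an HLS tool could map to hardware (hypothetical model).
    static void hls_relu(const float* in, float* out, int n) {
      for (int i = 0; i < n; ++i) {
        out[i] = in[i] > 0.0f ? in[i] : 0.0f;
      }
    }

    REGISTER_OP("HlsRelu")
        .Input("input: float")
        .Output("output: float")
        .SetShapeFn([](shape_inference::InferenceContext* c) {
          c->set_output(0, c->input(0));  // output shape matches input shape
          return Status::OK();
        });

    class HlsReluOp : public OpKernel {
     public:
      explicit HlsReluOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}

      void Compute(OpKernelContext* ctx) override {
        const Tensor& input = ctx->input(0);
        Tensor* output = nullptr;
        OP_REQUIRES_OK(ctx, ctx->allocate_output(0, input.shape(), &output));

        // Call the same C++ model that would be given to high-level synthesis,
        // so framework results validate the future hardware implementation.
        hls_relu(input.flat<float>().data(),
                 output->flat<float>().data(),
                 static_cast<int>(input.NumElements()));
      }
    };

    REGISTER_KERNEL_BUILDER(Name("HlsRelu").Device(DEVICE_CPU), HlsReluOp);

Compiled into a shared library using TensorFlow's custom-op build flow, an op like this can be loaded from Python with tf.load_op_library and dropped into an existing graph, which is one way the framework-to-hardware validation loop described in the abstract could be closed.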

Speaker(s):

Michael Fingeroff

HLS Technologist, Mentor

Michael Fingeroff has worked as an HLS Technologist for the Catapult High-Level Synthesis Platform at Mentor, A Siemens Business, since 2002. His areas of interest include machine learning, DSP, and high-performance video hardware. Prior to working for Mentor, he worked as a hardware design engineer developing real-time broadband video systems. He received his bachelor's and master's degrees in electrical engineering from Temple University in 1990 and 1995, respectively.
