
Embedded Vision Insights: December 19, 2017 Edition



LETTER FROM THE EDITOR

Dear Colleague,

Are you an early-stage start-up company developing a new product or service incorporating or enabling computer vision? Do you want to raise awareness of your company and products with vision industry experts, investors and entrepreneurs? Want a chance to win $5,000 in cash plus membership in the Embedded Vision Alliance? If so, apply for a chance to compete in the Vision Tank, part of the Embedded Vision Summit, which will take place May 22-24, 2018 in Santa Clara, California. The Vision Tank is the Embedded Vision Summit's annual start-up competition, showcasing the best new ventures using computer vision in their products or services. Also, register for next year's Embedded Vision Summit while Super Early Bird discount rates are still available, using discount code NLEVI1219.

Google will deliver the free webinar "An Introduction to Developing Vision Applications Using Deep Learning and Google's TensorFlow Framework" on January 17, 2018 at 9 am Pacific Time, in partnership with the Embedded Vision Alliance. The webinar will be presented by Pete Warden, research engineer and technology lead on the mobile and embedded TensorFlow team. It will begin with an overview of deep learning and its use for computer vision tasks. Warden will then introduce Google's TensorFlow as a popular open source framework for deep learning development, training, and deployment, and provide an overview of the resources Google offers to enable you to kick-start your own deep learning project. He'll conclude with several design examples that showcase TensorFlow use and optimization on resource-constrained mobile and embedded devices. For more information and to register, see the event page.
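For a flavor of the workflow the webinar will cover, here is a minimal, hypothetical sketch using current TensorFlow APIs: define a small Keras image classifier, then convert it with TensorFlow Lite for deployment on a resource-constrained device. The model, dataset, and hyperparameters are placeholders for illustration, not material from the webinar itself.

```python
# Illustrative sketch only: a tiny CNN classifier in TensorFlow/Keras,
# converted to a compact TensorFlow Lite flatbuffer for mobile/embedded use.
# The architecture, class count, and training data are placeholders.
import tensorflow as tf

# Small CNN for 32x32 RGB inputs and 10 hypothetical classes.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # training data omitted

# Convert the trained model for embedded deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # size/latency optimizations
tflite_model = converter.convert()
with open("classifier.tflite", "wb") as f:
    f.write(tflite_model)
```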

CEVA's free webinar "Enabling Mass Market ADAS Applications Using Real-time Vision Systems" will take place on January 3, 2018 at 7 am Pacific Time. The webinar will be presented by Jeff VanWashenova, Director of Automotive Segment Marketing at CEVA, and Young-Jun Yoo, Director of Strategic Marketing at fellow Alliance member company Nextchip. Topics include the challenges of ADAS and vision-based autonomous driving; an overview of Nextchip's APACHE4 ADAS SoC and its use of the CEVA-XM4 imaging and computer vision processor IP for differentiation and performance; and application use cases for the APACHE4 and CEVA-XM4. For more information and to register, see the event page.

Happy holidays from the Embedded Vision Alliance! The next edition of Embedded Vision Insights will be published in mid-January.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

ADAS AND AUTONOMOUS VEHICLES

Implementing an Optimized CNN Traffic Sign Recognition Solution – Au-Zone Technologies and NXP Semiconductors
Now that the benefits of deep neural networks for image classification are well known, the challenge has shifted to applying these powerful techniques to build practical, cost-effective solutions for commercial applications. In this presentation, Rafal Malewski, Head of the Graphics Technology Engineering Center at NXP Semiconductors, and Sébastien Taylor, Vision Technology Architect at Au-Zone Technologies, explain how their companies designed, implemented and deployed a real-world embedded traffic sign recognition solution on a heterogeneous processor. They show how they used the TensorFlow framework to train and optimize an efficient neural net classifier to execute within the constraints of a typical embedded processor, and how they designed a modular vision pipeline to support that classifier. They explain how they distributed the vision pipeline across the compute elements of the embedded SoC, present their methods for evaluating and optimizing each pipeline stage, and summarize the tradeoffs made in the final implementation. They also touch on solutions to some of the practical challenges of deploying CNN-based vision products, including support for multiple concurrent networks on a single device and efficiently managing remote network model updates.
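The sketch below illustrates the general shape of such a modular two-stage pipeline: a cheap detection stage proposes candidate sign regions, and a trained CNN classifier labels each crop. The detect_candidates callback, model file, and input geometry are assumptions for illustration, not Au-Zone's actual implementation.

```python
# Hypothetical two-stage traffic sign pipeline. Assumes a TensorFlow Lite
# classifier with a (1, 32, 32, 3) float32 input; all names are illustrative.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="sign_classifier.tflite")
interpreter.allocate_tensors()
input_idx = interpreter.get_input_details()[0]["index"]
output_idx = interpreter.get_output_details()[0]["index"]

def classify_crop(crop):
    """Run the CNN classifier on one candidate region (HxWx3, uint8)."""
    resized = tf.image.resize(crop[np.newaxis, ...], (32, 32)).numpy()
    interpreter.set_tensor(input_idx, (resized / 255.0).astype(np.float32))
    interpreter.invoke()
    scores = interpreter.get_tensor(output_idx)[0]
    return int(np.argmax(scores)), float(np.max(scores))

def process_frame(frame, detect_candidates):
    # Stage 1 (e.g., color/shape heuristics, possibly on a DSP or GPU)
    # yields candidate ROIs; stage 2 runs the neural classifier on each.
    results = []
    for (x, y, w, h) in detect_candidates(frame):
        label, confidence = classify_crop(frame[y:y+h, x:x+w])
        results.append(((x, y, w, h), label, confidence))
    return results
```

Splitting the pipeline this way is what allows each stage to be mapped to the compute element best suited for it on a heterogeneous SoC.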

How to Test and Validate an Automated Driving System – MathWorks
Have you ever wondered how ADAS and autonomous driving systems are tested? Automated driving systems combine a diverse set of technologies and engineering skill sets, from embedded vision to control systems. This technological diversity and complexity make it especially challenging to test these systems. In this session, Avinash Nehemiah, Product Marketing Manager for Computer Vision at MathWorks, describes the main challenges engineers face in testing and validating autonomous cars and driver assistance systems, and draws on case studies to share best practices from the automotive industry.
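One common building block in validating a perception stack is scoring detector output against labeled ground truth using intersection-over-union (IoU). MathWorks' workflow is MATLAB-based; the sketch below is a generic, language-neutral illustration in Python, not material from the session.

```python
# Generic detection-validation sketch: match detections to ground-truth
# boxes by IoU and tally hits and misses. Boxes are (x, y, width, height).

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def evaluate_frame(detections, ground_truth, threshold=0.5):
    """Greedily match detections to ground truth; return (TP, FP, FN)."""
    matched = set()
    tp = 0
    for det in detections:
        best = max(range(len(ground_truth)),
                   key=lambda i: iou(det, ground_truth[i]),
                   default=None)
        if (best is not None and best not in matched
                and iou(det, ground_truth[best]) >= threshold):
            matched.add(best)
            tp += 1
    return tp, len(detections) - tp, len(ground_truth) - tp
```

Aggregated over many labeled frames and scenarios, counts like these feed the precision/recall metrics used to decide whether a perception component meets its requirements.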

VISION PROCESSING IP BLOCKS FOR SOCS

Designing Scalable Embedded Vision SoCs from Day 1 – Synopsys
Some of the most critical embedded vision design decisions are made early on and affect the design’s ultimate scalability. Will the processor architecture support the needed vision algorithms? Is the CNN flexible enough? Are the tools capable of supporting hardware-vs-software tradeoffs and architecture scalability? This presentation from Pierre Paulin, Director of R&D for Embedded Vision at Synopsys, discusses his company's DesignWare EV6x Embedded Vision Processors, which offer a combination of high-performance vision CPU cores and a CNN engine with high-productivity programming tools based on OpenCL C and OpenVX. The programmable and configurable EV6x processors support a broad range of embedded vision applications including ADAS, video surveillance, augmented reality, and SLAM. Using representative embedded vision applications, this presentation illustrates the flexibility, efficiency, and power/performance/area benefits of the EV6x processors. These applications also show the flexibility and high performance of the integrated CNN engine for low-power embedded systems.

Computer Vision on ARM: The Spirit Object Detection Accelerator – ARM
In 2016, ARM released Spirit, a dedicated object detection accelerator, bringing industry-leading levels of power- and area-efficiency to computer vision workflows. In this talk, Tim Hartley, the company's Senior Product Manager in the Imaging and Vision Group, looks at the functionality and architecture of this unique accelerator and how it enables smart, at-the-edge cameras from IoT and mobile through to embedded devices and even up into the cloud. In addition to Spirit, the talk touches on ARM programmable solutions for vision, including a look ahead to how chip, system, algorithm and application developers will be able to leverage vision and machine learning software across all forms of processors, from CPUs to GPUs and beyond.

UPCOMING INDUSTRY EVENTS

CEVA Webinar – Enabling Mass Market ADAS Applications Using Real-time Vision Systems: January 3, 2018, 7:00 am PT

Consumer Electronics Show: January 9-12, 2018, Las Vegas, Nevada

Embedded Vision Alliance Webinar – An Introduction to Developing Vision Applications Using Deep Learning and Google's TensorFlow Framework: January 17, 2018, 9:00 am PT

Embedded World Exhibition and Conference: February 27-March 1, 2018, Nuremberg, Germany

Embedded Vision Summit: May 22-24, 2018, Santa Clara, California

More Events

FEATURED NEWS

Japan’s Komatsu Selects NVIDIA as Partner for Deploying AI to Create Safer, More Efficient Construction Sites

Basler Prepares to Present a New Camera Concept for Embedded Vision

Qualcomm Snapdragon 845 Mobile Platform Introduces New, Innovative Architectures for Artificial Intelligence and Immersion

The Khronos Group Releases Finalized SYCL 1.2.1

Videantis and ADASENS Partner to Address Explosive Growth in Intelligent Automotive Cameras

More News

 
