Embedded Vision Alliance: Video Interviews & Demos

"OpenCV: Current Status and Future Plans," a Presentation from OpenCV.org

Satya Mallick, Interim CEO of OpenCV.org, presents the "OpenCV: Current Status and Future Plans" tutorial at the May 2019 Embedded Vision Summit.

With over two million downloads per week, OpenCV is the most popular open source computer vision library in the world. It implements over 2500 optimized algorithms, works on all major operating systems, is available in multiple languages and is free for commercial use.

This talk primarily provides a technical update on OpenCV: What’s new in OpenCV 4.0? What is the Graph API? Why are we so excited about the Deep Neural Network (DNN) module in OpenCV? (Short answer: It is one of the fastest inference engines on the CPU.)
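As a quick illustration of the DNN module in action, here is a minimal Python sketch of its CPU inference flow; the model file, input image and 224x224 input size are placeholders, not details from the talk:

```python
# Minimal sketch of inference with OpenCV's DNN module on the CPU.
# "classifier.onnx" and "input.jpg" are placeholders; any classification
# model exported to ONNX would follow the same steps.
import cv2

net = cv2.dnn.readNetFromONNX("classifier.onnx")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)  # built-in CPU backend
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

img = cv2.imread("input.jpg")
# blobFromImage resizes, rescales and reorders channels in one call.
blob = cv2.dnn.blobFromImage(img, scalefactor=1.0 / 255, size=(224, 224),
                             swapRB=True)
net.setInput(blob)
scores = net.forward()                # 1 x num_classes score array
print("top class:", scores.argmax())
```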

Mallick also shares plans for the future of OpenCV, including new algorithms that the organization plans to add through the Google Summer of Code this year. And he briefly shares information on the new Open Source Vision Foundation (OSVF), on OpenCV’s sister organizations, CARLA and Open3D, and on some of the initiatives planned by these organizations.

"Improving the Safety and Performance of Automated Vehicles Through Precision Localization," a Presentation from VSI Labs

Phil Magney, founder of VSI Labs, presents the "Improving the Safety and Performance of Automated Vehicles Through Precision Localization" tutorial at the May 2019 Embedded Vision Summit.

How does a self-driving car know where it is? Magney explains how autonomous vehicles localize themselves against their surroundings through the use of a variety of sensors along with precision maps. He explores the pros and cons of various localization methods and the trade-offs associated with each of them. He also examines the challenges of keeping mapping assets fresh and up-to-date through crowdsourcing.
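To make the fusion idea concrete, here is an illustrative toy sketch (an assumption for exposition, not material from the talk) of inverse-variance weighting, the basic mechanism by which a coarse GNSS fix and a precise map-matched sensor estimate can be combined:

```python
# Toy 1-D illustration of fusing two independent position estimates by
# inverse-variance weighting; real localization stacks use full Kalman or
# particle filters, but the weighting principle is the same.
def fuse(x1, var1, x2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    return x, 1.0 / (w1 + w2)        # fused estimate and reduced variance

gnss = (105.2, 2.5 ** 2)             # GNSS fix, ~2.5 m standard deviation
map_match = (104.1, 0.1 ** 2)        # lidar matched to an HD map, ~10 cm
pos, var = fuse(*gnss, *map_match)
print(f"fused: {pos:.2f} m, sigma {var ** 0.5:.2f} m")  # map estimate dominates
```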

"AI Reliability Against Adversarial Inputs," a Presentation from Intel

Gokcen Cilingir, AI Software Architect, and Li Chen, Data Scientist and Research Scientist, both at Intel, present the "AI Reliability Against Adversarial Inputs" tutorial at the May 2019 Embedded Vision Summit.

As artificial intelligence solutions are becoming ubiquitous, the security and reliability of AI algorithms is becoming an important consideration and a key differentiator for both solution providers and end users. AI solutions, especially those based on deep learning, are vulnerable to adversarial inputs, which can cause inconsistent and faulty system responses. Since adversarial inputs are intentionally designed to cause an AI solution to make mistakes, they are a form of security threat.

Although security-critical functions such as face, voice or fingerprint login are the most obvious applications requiring robustness against adversarial threats, many other AI solutions also benefit from it, as robustness improves reliability and therefore user experience and trust. In this presentation, Cilingir and Chen explore selected adversarial machine learning techniques and principles from the point of view of enhancing the reliability of AI-based solutions.
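For concreteness, here is a hedged sketch of one widely used adversarial technique, the fast gradient sign method (FGSM). It illustrates the class of attack the talk addresses; it is not necessarily among the specific techniques the presenters cover:

```python
# FGSM in PyTorch: perturb each input pixel by +/- eps in the direction
# that increases the model's loss, producing an adversarial example.
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # small worst-case-direction step
    return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid pixel range
```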

"Distance Estimation Solutions for ADAS and Automated Driving," a Presentation from AImotive

Gergely Debreczeni, Chief Scientist at AImotive, presents the "Distance Estimation Solutions for ADAS and Automated Driving" tutorial at the May 2019 Embedded Vision Summit.

Distance estimation is at the heart of advanced driver assistance systems (ADAS) and automated driving (AD). Simply stated, safe operation of vehicles requires robust distance estimation. Many different types of sensors (camera, radar, LiDAR, sonar) can be used for distance estimation, and several estimation techniques are available for each sensor type, each with unique strengths and weaknesses. Debreczeni examines these techniques and their trade-offs, and shows how multiple techniques using different sensor types can be fused to enable robust distance estimation for a specific automated driving application.
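As one concrete example of a camera-based technique (an illustrative assumption, not taken from the talk), monocular distance estimation commonly uses the pinhole relation between an object's known real-world size and its apparent size in pixels:

```python
# Pinhole-camera distance estimate: an object of known height appears
# smaller in the image the farther away it is.
#   distance_m = focal_length_px * real_height_m / apparent_height_px
def mono_distance(focal_px, real_height_m, apparent_px):
    return focal_px * real_height_m / apparent_px

# A 1.5 m car rear seen 60 px tall through a 1200 px focal length:
print(mono_distance(1200, 1.5, 60))   # -> 30.0 m
```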

"Accelerate Adoption of AI at the Edge with Easy to Use, Low-power Programmable Solutions," a Presentation from Lattice Semiconductor

Hussein Osman, Consumer Segment Manager at Lattice Semiconductor, presents the "Accelerate Adoption of AI at the Edge with Easy to Use, Low-power Programmable Solutions" tutorial at the May 2019 Embedded Vision Summit.

In this talk, Osman shows why Lattice's low-power FPGA devices, coupled with the sensAI software stack, are a compelling solution for implementing sophisticated AI capabilities in edge devices. The latest release of the sensAI stack delivers more than a 10X performance increase over the previous release, driven by updates to the CNN IP and the neural network compiler tool, including new features such as support for 8-bit activation quantization and smart merging of layers.
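As background on the quantization feature mentioned above, here is a generic sketch of symmetric 8-bit quantization; the exact scheme sensAI implements is not described in this summary:

```python
# Symmetric per-tensor 8-bit quantization: map the observed activation
# range onto int8 with a single scale factor; dequantize with q * scale.
import numpy as np

def quantize_int8(x):
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

acts = np.random.randn(4, 8).astype(np.float32)
q, scale = quantize_int8(acts)
print("max abs error:", np.abs(q.astype(np.float32) * scale - acts).max())
```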

For a seamless user experience, the new release expands the list of neural network topologies and machine learning frameworks supported, and automates the quantization and fraction settings processes. In addition, Lattice Semiconductor provides a comprehensive set of reference designs which include training datasets and scripts for popular machine learning frameworks, enabling easy customization. And, to speed time to market for popular use cases, the company provides full turnkey solutions for human counting, human presence detection and key phrase detection.

"MediaTek’s Approach for Edge Intelligence," a Presentation from MediaTek

Bing Yu, Senior Technical Manager and Architect at MediaTek, presents the "MediaTek’s Approach for Edge Intelligence" tutorial at the May 2019 Embedded Vision Summit.

MediaTek has incorporated an AI processing unit (APU) alongside the traditional CPU and GPU in its SoC designs for the next wave of smart client devices (smartphones, cameras, appliances, cars, etc.). Edge applications can harness the CPU, GPU and APU together to achieve significantly higher performance with excellent efficiency.

In this talk, Yu presents MediaTek’s AI-enabled SoCs for smart client devices. He examines the features of the AI accelerator, which is the core building block of the APU. He also describes the accompanying toolkit, called NeuroPilot, which enables app developers to conveniently implement inference models using industry-standard frameworks.
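NeuroPilot's own APIs aren't detailed in this summary, but the general flow it supports, authoring a model in a standard framework and running it through an on-device runtime that can dispatch operators to an accelerator such as an APU, looks broadly like this generic TensorFlow Lite sketch (the model path is a placeholder):

```python
# Generic TFLite inference sketch (not NeuroPilot-specific): on-device
# runtimes like this can delegate supported ops to an accelerator.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.zeros(inp["shape"], dtype=inp["dtype"])   # dummy input tensor
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)
```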

"Can We Have Both Safety and Performance in AI for Autonomous Vehicles?," a Presentation from Codeplay Software

Andrew Richards, CEO and Co-founder of Codeplay Software, presents the "Can We Have Both Safety and Performance in AI for Autonomous Vehicles?" tutorial at the May 2019 Embedded Vision Summit.

The need for ensuring safety in AI subsystems within autonomous vehicles is obvious. How to achieve it is not. Standard safety engineering tools are designed for software that runs on general-purpose CPUs. But AI algorithms require more performance than CPUs provide, and the specialized processors employed to achieve this performance are very difficult to qualify for safety.

How can we provide the redundancy and rigorous testing that safety requires while still using specialized processors to achieve AI performance? How can ISO 26262 be applied to AI accelerators? How can standard automotive practices like coverage checking and MISRA coding guidelines be used?

Codeplay believes that safe autonomous vehicle AI subsystems are achievable, but only with cross-industry collaboration. In this presentation, Richards examines the challenges of implementing safe autonomous vehicle AI subsystems and explains the most promising approaches for overcoming these challenges, including leveraging standards bodies such as Khronos, MISRA and AUTOSAR.

"Memory-centric Hardware Acceleration for Machine Intelligence," a Presentation from Crossbar

Sylvain Dubois, Vice President of Business Development and Marketing at Crossbar, presents the "Memory-centric Hardware Acceleration for Machine Intelligence" tutorial at the May 2019 Embedded Vision Summit.

Even the most advanced AI chip architectures suffer from performance and energy efficiency limitations caused by the memory bottleneck between computing cores and data. Most state-of-the-art CPUs, GPUs, TPUs and other neural network hardware accelerators are limited by the latency, bandwidth and energy consumed to access data through multiple layers of power-hungry and expensive on-chip caches and external DRAMs. Near-memory computing, based on emerging nonvolatile memory technologies, enables a new range of performance and energy efficiency for machine intelligence.
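To put rough numbers on the bottleneck (these are commonly cited figures from Mark Horowitz's ISSCC 2014 keynote, not from this talk):

```python
# Back-of-envelope: energy of data movement vs. arithmetic at ~45 nm
# (Horowitz, ISSCC 2014). Off-chip DRAM access dwarfs the compute it feeds.
dram_access_pj = 640.0   # ~pJ per 32-bit off-chip DRAM access
fp_mult_pj = 3.7         # ~pJ per 32-bit floating-point multiply
print(f"one DRAM fetch costs ~{dram_access_pj / fp_mult_pj:.0f}x one multiply")
```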

In this presentation, Dubois introduces innovative and affordable near-memory processing architectures for computer vision and voice recognition, and presents architectural recommendations for edge computing and cloud servers. He also discusses how nonvolatile memory technologies, such as Crossbar Inc.’s ReRAM, can be directly integrated on-chip with dedicated processing cores, enabling new memory-centric computing architectures. The superior characteristics of ReRAM over legacy nonvolatile memory technologies help to address the performance and energy efficiency demands of machine intelligence at the edge and in the data center.

"DNN Challenges and Approaches for L4/L5 Autonomous Vehicles," a Presentation from Graphcore

Tom Wilson, Vice President of Automotive at Graphcore, presents the "DNN Challenges and Approaches for L4/L5 Autonomous Vehicles" tutorial at the May 2019 Embedded Vision Summit.

The industry has made great strides in the development of L4/L5 autonomous vehicles, but what’s available today falls far short of expectations set as recently as two to three years ago. To some extent, the industry is in a “we don’t know what we don’t know” state regarding the sensors and AI processing required for a reliable and practical L4/L5 solution.

Research on new types of DNNs for perception is advancing rapidly, and solutions for planning are in their infancy. In this talk, Wilson reviews important areas of uncertainty and surveys some of the DNN approaches under consideration for perception and planning. He also explores the compute challenges associated with these DNNs.

"Snapdragon Hybrid Computer Vision/Deep Learning Architecture for Imaging Applications," a Presentation from Qualcomm

Robert Lay, Computer Vision and Camera Product Manager at Qualcomm, presents the "Snapdragon Hybrid Computer Vision/Deep Learning Architecture for Imaging Applications" tutorial at the May 2019 Embedded Vision Summit.

Advances in imaging quality and features are accelerating, thanks to hybrid approaches that combine classical computer vision and deep learning algorithms and that take advantage of the powerful heterogeneous computing capability of Qualcomm Snapdragon mobile platforms. Qualcomm’s dedicated computer vision hardware accelerator, combined with the flexible deep learning capabilities of the Qualcomm AI Engine, helps developers meet performance goals for the most complex on-device vision use cases with lower power consumption and processor load. In this talk, Lay reviews the Snapdragon heterogeneous architecture and shows how Qualcomm's dedicated computer vision processor and tools accelerate the development of high-performance, efficient imaging and vision applications.
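As a hedged sketch of what such a hybrid pipeline can look like in code (generic OpenCV, not Snapdragon-specific; the model file is a placeholder), a cheap classical stage can gate the expensive DNN stage:

```python
# Hybrid classical-CV + DNN pipeline: background subtraction (classical)
# gates the detector (DNN) so inference runs only on frames with motion.
import cv2

bg = cv2.createBackgroundSubtractorMOG2()
net = cv2.dnn.readNetFromONNX("detector.onnx")   # placeholder model

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    if cv2.countNonZero(mask) > 0.01 * mask.size:   # enough motion?
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (300, 300), swapRB=True)
        net.setInput(blob)
        detections = net.forward()   # run the DNN only when needed
```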