
Embedded Vision Insights: August 21, 2018 Edition



VISION PROCESSING IN THE CLOUD

Introduction to Creating a Vision Solution in the Cloud (GumGum)
A growing number of applications utilize cloud computing for execution of computer vision algorithms. In this presentation, Nishita Sant, Computer Vision Scientist at GumGum, introduces the basics of creating a cloud-based vision service, based on GumGum's experience developing and deploying a computer vision-based service for enterprises. Sant explores the architecture of a cloud-based computer vision solution in three parts: an API, computer vision modules (housing both algorithm and server), and computer vision features (complex pipelines built with modules). Execution in the cloud requires the API to handle a continuous, but unpredictable, stream of data from multiple sources and task the appropriate computer vision modules with work. These modules consist of an AWS Simple Queue Service (SQS) queue and an EC2 auto-scaling group and are built to handle both images and video. Sant discusses in detail how these modules optimally utilize instance hardware for executing a computer vision algorithm. Further, she discusses GumGum's method of inter-process communication, which allows for the creation of complex computer vision pipelines that require several modules to be linked. Sant also addresses cost and run-time tradeoffs between GPU and CPU instances.
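As a rough sketch of this module pattern (not GumGum's actual implementation), the worker below long-polls an SQS queue for image tasks and runs a placeholder detector on each message; the queue name, the message schema and run_detector() are assumptions for illustration only.

```
# Minimal sketch of a cloud vision "module": an SQS-fed worker that an
# EC2 auto-scaling group could run many copies of. Queue name, message
# schema and run_detector() are illustrative placeholders.
import json
import boto3

sqs = boto3.resource("sqs")
queue = sqs.get_queue_by_name(QueueName="vision-module-tasks")  # hypothetical queue

def run_detector(image_url):
    """Placeholder for the actual computer vision algorithm."""
    return {"image": image_url, "labels": []}

while True:
    # Long-poll so idle workers don't spin; auto-scaling can add or remove
    # workers as queue depth changes.
    for message in queue.receive_messages(WaitTimeSeconds=20, MaxNumberOfMessages=10):
        task = json.loads(message.body)
        result = run_detector(task["image_url"])
        # A real module would hand results to the next pipeline stage here.
        print(result)
        message.delete()
```

Scaling the worker fleet on queue depth, rather than on CPU load, is what lets a design like this absorb an unpredictable stream of incoming images.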

Infusing Visual Understanding in Cloud and Edge Solutions Using State-of-the-Art Microsoft Algorithms (Microsoft)
Microsoft offers its state-of-the-art computer vision algorithms, used internally in several products, through the Cognitive Services cloud APIs. With best-in-class results on industry benchmarks, these APIs cover a broad spectrum of tasks spanning image classification, object detection, image captioning, face recognition, OCR and more. Beyond the cloud, these algorithms even run locally on mobile or edge devices, or via Docker containers, for real-time scenarios. In this talk, Anirudh Koul, Senior Data Scientist, and Jin Yamamoto, Principal Program Manager, both from Microsoft, explain more about these APIs and tools, and how you can easily get started today to make both the cloud and edge visually intelligent.
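For readers who want to experiment, the sketch below shows a minimal REST call to the Cognitive Services Computer Vision analyze endpoint; the region, API version, visual features and key are placeholder assumptions, so consult Microsoft's documentation for the exact URL and parameters of your subscription.

```
# Minimal sketch of calling the Cognitive Services Computer Vision API over REST.
# Endpoint region, API version and the key below are placeholders.
import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"  # assumed region/version
KEY = "YOUR_SUBSCRIPTION_KEY"

def analyze_image(image_url):
    response = requests.post(
        ENDPOINT,
        params={"visualFeatures": "Description,Tags"},
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/json"},
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()

# Example: print the auto-generated caption for a publicly reachable image URL.
# print(analyze_image("https://example.com/photo.jpg")["description"]["captions"])
```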

LOW-POWER AND LOW-COST DEEP LEARNING INFERENCE

Deploying CNN-based Vision Solutions on a $3 Microcontroller (Au-Zone Technologies)
In this presentation, Greg Lytle, VP of Engineering at Au-Zone Technologies, explains how his company designed, trained and deployed a CNN-based embedded vision solution on a low-cost, Cortex-M-based microcontroller (MCU). He describes the steps taken to design an appropriate neural network and then to implement it within the limited memory and computational constraints of an embedded MCU. He highlights the technical challenges Au-Zone Technologies encountered in the implementation and explains the methods the company used to meet its design objectives. He also explores how this type of solution can be scaled across a range of low-cost processors with different price and performance metrics. Finally, he presents and interprets benchmark data for representative devices.
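To make the memory constraint concrete, the sketch below sizes a toy Keras CNN and estimates its weight footprint at 8-bit quantization; the layer sizes are illustrative assumptions, not Au-Zone's network, and a real MCU deployment must also budget for activation buffers, runtime code and flash layout.

```
# Rough sketch of checking whether a small CNN's weights fit an MCU budget.
# The architecture is illustrative only; activation memory and library code
# add further overhead on a real Cortex-M target.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

params = model.count_params()
print(f"parameters: {params}")
print(f"~{params} bytes of weights at 8-bit quantization "
      f"({params * 4} bytes at float32)")
```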


Programmable CNN Acceleration in Under 1 Watt (Lattice Semiconductor)
Driven by factors such as privacy concerns, limited network bandwidth and the need for low latency, system designers are increasingly interested in implementing artificial intelligence (AI) at the edge. Low-power (under 1 Watt), low-cost (under $10) FPGAs, such as Lattice’s ECP5, offer an attractive option for implementing AI at the edge. In order to achieve the best balance of accuracy, power and performance, designers need to carefully select the network model and quantization level. This presentation from Gordon Hands, Director of Marketing for IP and Solutions at Lattice Semiconductor, uses two application examples to help system architects better understand feasible solutions. These examples illustrate the trade-offs of network design, quantization and performance.
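As a simple illustration of the quantization trade-off Hands discusses, the sketch below applies generic uniform symmetric quantization to random stand-in weights at several bit widths and reports the resulting error; it is a toy example, not Lattice's specific design flow.

```
# Toy illustration of the precision/accuracy trade-off when quantizing weights.
# Uniform symmetric quantization to n bits; a real FPGA flow would quantize
# activations as well and typically retrain to recover accuracy.
import numpy as np

def quantize(weights, bits):
    levels = 2 ** (bits - 1) - 1           # e.g. 127 for 8-bit signed
    scale = np.max(np.abs(weights)) / levels
    q = np.round(weights / scale).astype(np.int32)
    return q * scale                        # dequantized values for error measurement

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=10_000)         # stand-in for trained weights

for bits in (8, 4, 2):
    err = np.mean(np.abs(w - quantize(w, bits)))
    print(f"{bits}-bit quantization: mean absolute error {err:.5f}")
```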

UPCOMING INDUSTRY EVENTS

Deep Learning for Computer Vision with TensorFlow Training Class: October 4, 2018, San Jose, California

Embedded Vision Summit: May 20-23, 2019, Santa Clara, California

More Events


FEATURED NEWS

NVIDIA Reinvents Computer Graphics with Turing Architecture

Embedded Vision in Packaging Applications from Vision Components

Series Production Start: Four New Basler ace U Models with the IMX287 and IMX273 Sensors from Sony

OmniVision and BioVision Announce Turnkey Medical CMOS Camera Platform for Disposable, In-Office Endoscopy

Out Now: Allied Vision's GigE Camera Mako G with Sony IMX273 and IMX287 CMOS Sensors

More News

 

