
Embedded Vision Insights: October 22, 2019 Edition



LETTER FROM THE EDITOR

Dear Colleague,

Deep Learning for Computer Vision with TensorFlow 2.0 and Keras is the Embedded Vision Alliance's in-person, hands-on technical training class. This one-day overview will give you the critical knowledge you need to rapidly develop deep learning computer vision applications and neural networks using TensorFlow 2.0 and Keras. Our next session will take place one week from this Friday, on November 1 in Fremont, California, hosted by Alliance Member company Mentor. Details, including online registration, can be found here.
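For a sense of what development with TensorFlow 2.0 and Keras looks like, here is a minimal sketch of a small convolutional image classifier built with the Keras Sequential API. The input size and class count are arbitrary placeholders, and this is an illustration rather than material from the class itself.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal convolutional classifier sketch using the TF 2.0 Keras API.
# Input size (96x96 RGB) and class count (10) are arbitrary placeholders.
model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(96, 96, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

# Compile with a standard optimizer and loss; training data would come from
# tf.data pipelines or NumPy arrays in a real application.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```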

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

LOCALIZATION AND MAPPING

Fundamentals of Monocular SLAM (Cadence)
Simultaneous Localization and Mapping (SLAM) refers to a class of algorithms that enable a device with one or more cameras and/or other sensors to create an accurate map of its surroundings, determine its location relative to that map and track its path as it moves through the environment. This is a key capability for many new use cases and applications, especially in the domains of augmented reality, virtual reality and mobile robots. Monocular SLAM is a type of SLAM that relies exclusively on a monocular image sequence captured by a moving camera. In this talk, Shrinivas Gadkari, Design Engineering Director at Cadence, introduces the fundamentals of monocular SLAM algorithms, from input images to 3D map. He takes a close look at key components of monocular SLAM algorithms, including ORB (Oriented FAST and Rotated BRIEF) feature extraction, fundamental matrix-based pose estimation, stitching together poses using translation estimation, and loop closure. He also discusses implementation considerations for these components, including the arithmetic precision required to achieve acceptable mapping and tracking accuracy.
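For readers who want a concrete feel for two of these building blocks, below is a minimal sketch (assuming OpenCV, with placeholder file names and placeholder camera intrinsics) that extracts and matches ORB features between two frames and then recovers the relative camera pose from epipolar geometry. It uses the essential matrix, the calibrated-camera counterpart of the fundamental matrix, and is an illustration rather than a description of Gadkari's implementation.

```python
import cv2
import numpy as np

# Placeholder inputs: two grayscale frames from a calibrated monocular camera.
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[700.0, 0.0, 320.0],   # placeholder camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

# 1. Detect and describe ORB (Oriented FAST and Rotated BRIEF) features.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Match binary descriptors between the two frames.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3. Estimate epipolar geometry and recover relative pose (R, t up to scale).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Relative rotation:\n", R, "\nTranslation direction:\n", t.ravel())
```

A full monocular SLAM system would repeat this pose estimation over the image sequence, stitch the poses together and correct accumulated drift with loop closure, as described in the talk.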


Improving the Safety and Performance of Automated Vehicles Through Precision Localization (VSI Labs)
How does a self-driving car know where it is? Phil Magney, founder of VSI Labs, explains in this presentation how autonomous vehicles localize themselves against their surroundings through the use of a variety of sensors along with precision maps. He explores the pros and cons of various localization methods and the trade-offs associated with each of them. He also examines the challenges of keeping mapping assets fresh and up-to-date through crowdsourcing.

DEPTH SENSING

REAL3 Time of Flight: A New Differentiator for Mobile Phones (Infineon Technologies)
In 2019, 3D imaging has become mainstream in mobile phone cameras. What started in 2016 with the first two smartphones using an Infineon 3D time-of-flight (ToF) imager has now become a major differentiator for mobile phone manufacturers. Today, all leading smartphone brands have launched phones with 3D imaging technologies. They are using 3D vision to sense the world. Infineon's latest REAL3 technology has helped to enable novel approaches in mobile authentication, computational photography and gesture control, and allowed users to experience augmented worlds. What has changed since 2016? Where are 3D mobile applications today and how will they improve with the next generation of 3D imagers? In this talk, Walter Bell, 3D Imaging Application Engineer at Infineon Technologies, answers these and other questions, guiding you to shape the future of 3D imaging.

Applied Depth Sensing with Intel RealSense (Intel)
As robust depth cameras become more affordable, many new products will benefit from true 3D vision. This presentation from Sergey Dorodnicov, Software Architect at Intel, highlights the benefits of depth sensing for tasks such as autonomous navigation, collision avoidance and object detection in robots and drones. Dorodnicov explores a fully functional SLAM pipeline built using free and open source software components and an off-the-shelf Intel RealSense D435i depth camera, and shows how it performs for real-time environment mapping and tracking.
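As a starting point for experimenting with this kind of pipeline, the short sketch below (a minimal example assuming the open-source pyrealsense2 Python wrapper and a connected RealSense depth camera such as the D435i) streams depth frames and reads the distance at the center pixel. It is not the SLAM pipeline from the talk, just the first step of getting depth data into one.

```python
import pyrealsense2 as rs

# Configure and start a depth stream on a connected RealSense camera
# (e.g., a D435i). Resolution and frame rate are typical supported values.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    for _ in range(30):  # grab a few frames so auto-exposure settles
        frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance in meters at the center pixel of the depth image.
    dist = depth.get_distance(320, 240)
    print("Distance at image center: {:.3f} m".format(dist))
finally:
    pipeline.stop()
```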

UPCOMING INDUSTRY EVENTS

Technical Training Class – Deep Learning for Computer Vision with TensorFlow 2.0: November 1, 2019, Fremont, California

Renesas Webinar – Renesas' Dynamically Reconfigurable Processor (DRP) Technology Enables a Hybrid Approach for Embedded Vision Solutions: November 13, 2019, 10:00 am PT

Lattice Semiconductor Webinar – Delivering Milliwatt AI to the Edge with Ultra-Low Power FPGAs: November 19, 2019, 11:00 am PT and November 21, 2019, 6:00 am PT

Embedded AI Summit: December 6-8, 2019, Shenzhen, China

Embedded Vision Summit: May 18-21, 2020, Santa Clara, California

More Events

VISION PRODUCT OF THE YEAR SHOWCASE

Morpho's Video Processing Software (Best Software or Algorithm)
Morpho's Video Processing Software (MVPS) is the 2019 Vision Product of the Year Award winner in the Software or Algorithm category. MVPS is a suite of video processing software IP targeted at embedded devices and cloud-based services. It packages a variety of video enhancement technologies as standalone application software, independent of existing plug-ins such as those provided for Adobe's editing software. MVPS achieves faster processing speeds than other solutions on the market while maintaining image quality, a result of Morpho's extensive experience providing image and video processing software implemented and operated on edge devices such as smartphones.

Please see here for more information on Morpho and its Video Processing Software product. The Vision Product of the Year Awards are open to Member companies of the Embedded Vision Alliance and celebrate the innovation of the industry's leading companies that are developing and enabling the next generation of computer vision products. Winning a Vision Product of the Year award recognizes leadership in computer vision as evaluated by independent industry experts.

FEATURED NEWS

Intel Enables AI Acceleration and Brings New Pricing to Xeon W and X-Series Processors

New OmniVision Image Sensor Combines High Dynamic Range Video Capture and Ultra Wide Angle Still Image Performance

AImotive's aiWare to be Integrated Into Nextchip Apache5 Imaging Edge Processor

Allied Vision Announces Three New High-resolution Prosilica GT Cameras

Vision Components' Shielded MIPI Cables Deliver High Data Rates

More News

 
