
Going Deep: Why Depth Sensing Will Proliferate


This blog post was originally published in the early September 2016 edition of BDTI's InsideDSP newsletter. It is reprinted here with the permission of BDTI.

If you’ve read recent editions of this column, you know that I believe that embedded vision – enabling devices to understand the world visually – will be a game-changer for many industries.  For humans, vision enables many diverse capabilities: reading your spouse’s facial expression, navigating your car through a parking garage, or threading a needle.  Similarly, embedded vision is now enabling all sorts of devices (from vacuum cleaning robots to cars) to be more autonomous, easier to use, safer, more efficient and more capable.

When we think about embedded vision (or, more generically, computer vision), we typically think about algorithms for identifying objects:  a car, a curb, a pedestrian, etc.  And, to be sure, identifying objects is an important part of visual intelligence.  But it’s only one part.

Particularly for devices that interact with the physical world, it’s important to know not only what objects are in the vicinity, but also where they are.  Knowing where things are enables a camera to focus on faces when taking a photo, a vacuum cleaning robot to avoid getting wedged under the sofa, and a factory robot to safely collaborate with humans.  Similarly, it’s often useful to know the size and shape of objects – for example, to enable a robot to grasp them.

We live in a three-dimensional world, and the location, size and shape of an object is a 3D concept.  It’s sometimes possible to infer the depth dimension from a 2D image (for example, if the size of the object is already known), but in general, it’s much easier to measure the depth directly using a depth sensor.
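To make the "known size" case concrete, here's a minimal sketch of the standard pinhole-camera relationship (depth Z = focal length × real width / apparent width in pixels). The function name and parameters are illustrative, not from the original article:

```python
def depth_from_known_size(focal_length_px, real_width_m, apparent_width_px):
    """Estimate distance to an object of known physical size.

    Pinhole-camera model: an object of real width W meters, imaged by a
    camera with focal length f (in pixels), appears w pixels wide when it
    is Z = f * W / w meters away.
    """
    if apparent_width_px <= 0:
        raise ValueError("apparent width must be positive")
    return focal_length_px * real_width_m / apparent_width_px

# Example: a 0.5 m wide sign spanning 100 pixels, focal length 1000 px
# -> estimated depth of 5.0 meters
distance = depth_from_known_size(1000, 0.5, 100)
```

This works only when the object's real size is known in advance, which is exactly why a direct depth measurement is usually preferable.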

Historically, depth sensors have been bulky and expensive, like the LiDAR sensors seen on top of Google’s self-driving car prototypes.  But this is changing fast.  The first version of the Microsoft Kinect, introduced in 2010, showed that it was possible – and useful – to incorporate depth sensing into a consumer product.  Since then, many companies have made enormous investments to create depth sensors that are more accurate, smaller, less expensive and less power hungry.  Other companies (such as Google with Project Tango and Intel with RealSense) have invested in algorithms and software to turn raw depth sensor data into data that applications can use.  And application developers are finding lots of ways to use this data.

One of my favorite examples is 8tree, an innovative start-up that designs easy-to-use handheld devices for measuring surface deformities such as hail damage on car bodies.  Augmented reality games in which computer-generated characters interact with the physical world are another compelling example.

There are many types of depth sensors, including stereo cameras, time-of-flight and structured light.  Some of these, like stereo cameras, naturally produce a conventional RGB image in addition to depth data.  With other depth sensor types, the depth sensor is often paired with a conventional image sensor so that both depth and RGB data are available.  This naturally raises the question of how to make the best use of both the RGB and the depth data.  Perhaps not surprisingly, researchers have recently applied artificial neural networks to this problem with success.
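As a sketch of how a stereo camera recovers depth: matching a point between the left and right images yields a disparity (the horizontal pixel shift), and depth follows from Z = focal length × baseline / disparity. This is the textbook stereo triangulation formula; the function and parameter names below are my own illustration:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Convert stereo disparity to depth via triangulation.

    For a rectified stereo pair with cameras separated by baseline_m
    meters and focal length focal_length_px (in pixels), a point with
    disparity d pixels lies at depth Z = f * B / d meters. Larger
    disparity means the point is closer; zero disparity means it is
    at infinity (or the match failed).
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Example: f = 700 px, 12 cm baseline, 21 px disparity -> 4.0 meters
z = depth_from_disparity(700, 0.12, 21)
```

In practice the hard part is computing the disparity map itself (finding reliable left-right correspondences), which is where much of the algorithmic investment mentioned above goes.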

The more our devices know about the world around them, the more effective they can be.  Depth is a key aspect of visual perception, but one that's been out of reach for most product designers.  Now, thanks to improvements in depth sensors, algorithms, software and processors, it's becoming increasingly practical to incorporate depth sensing into even cost- and power-constrained devices like mobile phones. Look, for example, at Apple's just-announced iPhone 7 Plus, along with other recently-introduced dual-camera smartphones such as Huawei's P9, Lenovo's Phab2 Pro, LG's G5 and V20, and Xiaomi's RedMi Pro.

Speaking of improved algorithms, I’ll close with a final mention of a unique opportunity for those of you looking to jump-start your work with deep learning algorithms.  The Embedded Vision Alliance and BDTI are collaborating with the primary developers of the popular Caffe deep learning framework to present a full-day deep learning tutorial on September 22 in Cambridge, Massachusetts.  This tutorial will provide an introduction to deep neural networks and a hands-on introduction to the Caffe framework. For details about this unique event, please visit the tutorial web page.

Jeff Bier
Co-Founder and President, BDTI
Founder, Embedded Vision Alliance
