
In Embedded Vision, Sensors Rule


By Vin Ratford
Executive Director, Embedded Vision Alliance


This blog post was originally published at EE Times' Industrial Control Design Line. It is reprinted here with the permission of EE Times.

This week, I've invited my colleague Vin Ratford to share his perspective on the central role of image sensors in embedded vision. Vin has a keen eye for technology trends, having spent over 30 years in the electronics industry, most recently as senior VP at Xilinx. — Jeff

Machines that see and understand are only as good as their eyes — image sensors. This may be obvious to those who work in the sensor business or who are experienced in computer vision. But when I was working at Xilinx, I mistakenly thought that processors, software, and algorithms were the key to developing systems with visual intelligence.

As I started working with the Embedded Vision Alliance, however, I learned a great deal about how important image sensors are. The selection of the sensor is often the first step in developing a vision system. And the choice of sensor (as well as image preprocessing steps, such as dynamic range enhancement) has an enormous impact on what types of vision algorithms can be used, and how effective they are.

At the Embedded Vision Summit conference in San Jose, Calif., in April, a session devoted to Image Sensors and Front-End Image Processing introduced advances in 3D sensors and highlighted how vision algorithms are impacted by the choice of sensors and image preprocessing steps. For those new to the topic, the Alliance and its member companies recently published an article on 3D sensors and how their ability to discern depth can lead to exciting new capabilities. It also discusses the tradeoffs between various 3D sensor approaches (stereoscopic vision, structured light, and time of flight).
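To make the stereoscopic tradeoff concrete, here is a minimal sketch (mine, not drawn from the article) of how a rectified stereo pair converts pixel disparity into depth. The focal length and baseline values are hypothetical, chosen only for illustration; notice how small disparities (distant objects) make the estimate very sensitive to matching error, one of the tradeoffs weighed against structured light and time of flight.

```python
# Minimal sketch of stereoscopic depth recovery from a rectified stereo
# pair. FOCAL_LENGTH_PX and BASELINE_M are hypothetical values chosen
# for illustration, not parameters of any product named above.

FOCAL_LENGTH_PX = 700.0  # focal length expressed in pixels
BASELINE_M = 0.06        # distance between the two camera centers, meters

def depth_from_disparity(disparity_px: float) -> float:
    """Depth (meters) of a point whose left/right image positions
    differ by disparity_px pixels: Z = f * B / d."""
    if disparity_px <= 0.0:
        raise ValueError("disparity must be positive")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

if __name__ == "__main__":
    print(depth_from_disparity(42.0))  # ~1.0 m: nearby point, robust
    print(depth_from_disparity(2.0))   # ~21 m: here a 1-pixel matching
                                       # error shifts the estimate by meters
```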

The Embedded Vision Alliance's website also contains a wealth of information on image sensors, both for newcomers and for those with some existing expertise in the technology and products. I encourage you, for example, to check out an article authored by Alliance member BDTI and published in EDN Magazine last summer, along with an earlier tutorial article. BDTI also delivered presentations on image sensors at two Alliance member meetings last year, both of which are available in archived video form.

At the most recent meeting of Embedded Vision Alliance member companies, we heard a fascinating presentation by Professor Masatoshi Ishikawa from the University of Tokyo. Professor Ishikawa showed how a high-speed, vision-based robot with a frame rate of 1000 fps can dramatically simplify vision algorithms and achieve real-time response far beyond what humans can do (like catching an egg in mid-flight).
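A bit of back-of-the-envelope arithmetic shows why such a high frame rate simplifies the algorithms. The speed, frame rates, and pixel scale below are illustrative assumptions of mine, not figures from Professor Ishikawa's system:

```python
# Back-of-the-envelope sketch of why 1000 fps simplifies tracking.
# The speed, frame rates, and pixel scale are illustrative only.

def interframe_motion_px(speed_m_s: float, fps: float,
                         pixels_per_meter: float) -> float:
    """Pixels an object moves between two consecutive frames."""
    return (speed_m_s / fps) * pixels_per_meter

if __name__ == "__main__":
    # An object moving at 2 m/s, imaged at roughly 500 px per meter:
    print(interframe_motion_px(2.0, 30.0, 500.0))    # ~33 px at 30 fps
    print(interframe_motion_px(2.0, 1000.0, 500.0))  # 1 px at 1000 fps
```

At one pixel of motion per frame, a tracker can get by with a tiny local search window instead of a full-frame detection pass, which is one way a high frame rate can trade sensor speed for algorithmic simplicity.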

We also saw a fascinating demo of Alliance member company Aptina's new high-dynamic-range sensor. Its ability to deliver rich, clear images in very low light was remarkable, significantly better than what a human eye can achieve. As I think of all the "Kodak moments" that I've lost over the years with my cellphone camera because of insufficient light, I look forward to having this type of sensor in my next phone!

As a semiconductor veteran, I still believe that the industry must continue to leverage Moore's law to improve processors with increased performance, reduced power, and lower cost. But now I also see that innovations in sensors will be key to enabling ubiquitous "machines that see and understand."

As processor specialists like me start to learn about sensors, and sensor experts learn more about processors, opportunities emerge for real innovation. This is one of the reasons I was excited about helping to start the Embedded Vision Alliance. A few years ago, the ecosystem for embedded vision technology was fragmented and chaotic. We hoped that the Alliance could bring it together, to the benefit of suppliers and designers alike. My education in sensors shows me that this is starting to happen.

I plan to continue my education at the next Embedded Vision Summit conference in the Boston area on October 2, 2013. I hope you can join me there and help continue the conversation.
