Blog

ADAS is becoming the new sweetheart of the automotive industry, driving most of the innovation within this enormous ecosystem.

Neural network algorithms have gained prominence in computer vision and other fields. The time may be ripe for neural network processors.

Embedded vision in robotics has the potential for drastically improving everything from mundane tasks to life’s most critical functions.

Alliance executive director Vin Ratford discusses educational resources for meeting the challenge of programming heterogeneous processors.

For demanding applications such as embedded vision, heterogeneous multicore architectures often yield the best bang for the buck (or Watt).

One great example of embedded vision's use in our lives is the vision-based safety features in advanced driver assistance systems (ADAS).

Face recognition is admittedly not yet perfect. However, other face analysis technologies are more mature and enable amazing applications.

In the consumer market, one of the most interesting uses of new vision technologies is the creation of more natural user interfaces.

New image sensor capabilities (3D, resolution, frame rate, dynamic range, etc.) have a dramatic impact on system architecture and algorithms.

I used to think that processors and algorithms were the key, but machines that see and understand are only as good as their image sensors.