Blog

The Embedded Vision Summit is a great industry event for all those involved with vision and surrounding technologies.

What if we were to develop a new technology that was better than GPS, one that worked in urban environments, as well as indoors?

Autonomous devices rely on vision to enable them to safely move about their environments and meaningfully interact with objects around them.

Deep learning is appealing because image and video data is massive and rich with information, yet infinitely variable and often ambiguous.

As smartphone cameras improve, the market has come to expect DSLR-like image and video quality as well as DSLR-like features.

Key to an understanding of how computer vision will evolve is the reality that it's an enabling technology, not an end in itself.

"Deep learning" artificial neural networks significantly outperform previous techniques across a diverse range of visual understanding tasks.

While employing vision and deep learning in embedded systems poses a challenge, it is becoming a requirement.

While devices are everywhere you look, the financial gains have been slower to follow.

HoloLens has nailed both the "feels real" and ease-of-use aspects of a "mixed reality" glasses product.