
Seeing Clearer – Driving Toward Better Cameras for Safer Vehicles


This article was originally published by Dave Tokic of Alliance member company Algolux. It is reprinted here with Tokic's permission.

To really reduce vehicle accidents and fatalities, the cameras in our cars need to “see” better… much better. This has been a major theme in the automotive industry, one that has become even more urgent as traffic fatalities are again on the rise.

I’ve recently been fortunate to participate in some of the leading industry events examining this specific issue, such as AutoSens in Brussels, Auto.AI Europe in Berlin, the IEEE P2020 working group meetings, and the Embedded Vision Alliance (EVA) Industry and Technology Forum. Here are a number of the key takeaways I captured:

Cameras will be the dominant sensing tech for driven and autonomous vehicles

With NHTSA’s rearview camera requirement for new vehicles going into effect in May 2018, just around the corner, and Euro NCAP’s rating system and timeline already taking lane keeping and Automatic Emergency Braking (AEB) into account (Richard Schram of Euro NCAP gave a great talk on this at AutoSens), automakers have accelerated the integration of rear and surround-view cameras across all tiers of vehicles. Yole Developpement forecasts the automotive imaging sensor market to grow at a 24% CAGR through 2022, and Roger Lanctot of Strategy Analytics shared during the EVA event that automotive camera sensor volume will reach roughly 250M units per year in 2024. Numerous other reports show OEMs integrating many more cameras into cars each model year, moving from one camera (rearview) to 8-12 cameras for autonomous cars and trucks.
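For a sense of scale, compound annual growth adds up quickly. Here is a quick back-of-the-envelope check, assuming a roughly five-year forecast horizon (the source states only “through 2022,” so the horizon is my assumption):

```python
# A 24% CAGR sustained over five years compounds to nearly a tripling
# of the market (the five-year horizon is an assumption, not from the source).
growth = 1.24 ** 5
print(f"{growth:.2f}x")  # ~2.93x
```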

Camera image quality is critical for vehicle safety

This is true not only for the camera-based viewing systems a driver uses to more effectively see and avoid people and obstacles in the car’s path, but also for the computer-vision-based ADAS and autonomous driving systems (more on that in a later post). So much so that a little over a year ago, the automotive industry and the IEEE together created the IEEE P2020 Automotive Image Quality (IQ) Working Group to explicitly develop image quality metrics for both driver viewing and computer vision.
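To make the idea of an image quality metric concrete, here is a minimal sketch of a simple KPI of the kind that feeds into such evaluations: a temporal signal-to-noise ratio measured on a flat test patch, a common building block for low-light assessment. This is an illustration only, not an actual P2020 metric (those are still being defined), and the function name and synthetic test data are my own:

```python
import numpy as np

def patch_snr_db(frames: np.ndarray) -> float:
    """Temporal SNR over a stack of frames of a static, uniformly lit patch:
    mean signal divided by per-pixel temporal noise, in decibels.

    frames: array of shape (num_frames, height, width), linear sensor values.
    An illustrative KPI only, not an IEEE P2020 metric.
    """
    signal = frames.mean()                 # average patch level
    noise = frames.std(axis=0).mean()      # per-pixel temporal std, averaged
    return 20.0 * np.log10(signal / noise)

# Example: a synthetic 100-frame capture of a flat gray patch with noise
rng = np.random.default_rng(0)
frames = rng.normal(loc=500.0, scale=12.0, size=(100, 64, 64))
print(f"Patch SNR: {patch_snr_db(frames):.1f} dB")  # ~32 dB for this data
```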

Achieving optimal image quality is really hard

Those little cameras on your vehicle that project video to your dashboard display, and soon to door-mounted screens for side-mirror replacement, are quite complex devices that need to perform in much harsher and more difficult conditions than your cellphone camera. They have better optics to minimize flare and other aberrations, high dynamic range (HDR) sensors to deal with quickly shifting lighting conditions, and sophisticated image signal processors (ISPs) to deliver a good image (a simplified sketch of an ISP pipeline follows the list below). Among the many excellent presentations at AutoSens (next one in Detroit, May 2018), a number of insightful talks and panels outlined the challenges and approaches to dealing with automotive imaging:

  • Alexis Lluis-Gomez of ARM shared details of how ISPs work and what they must handle in automotive use cases
  • Tarek Lule of ST covered the challenges of automotive HDR imaging, including LED flicker, ghosting, and motion blur
  • Abhay Rai of Sony spoke on sensor sensitivity and noise mitigation for low-light imaging
  • Marc Geese of Bosch emphasized the critical automotive imaging use cases and the metrics used to evaluate imaging performance
  • Felix Heide of Algolux presented new approaches to automating optimal image quality and increasing the robustness of vision systems in difficult imaging scenarios
  • A panel on image sensing covered pixel count, IQ, and extending what cameras can see, with Prof. Patrick Denny (Valeo), Dr. Martin Punke (Continental), and Abhay Rai (Sony), moderated by Dr. Sven Fleck (SmartSurv)
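To make the ISP’s role a bit more concrete, below is a heavily simplified sketch of the kind of processing chain these talks cover: black-level subtraction, white balance, demosaicing, and a tone curve. Real automotive ISPs add HDR merging, denoising, lens-shading correction, and more, each with many tunable parameters; all function and parameter names here are my own illustration, not any vendor’s API:

```python
import numpy as np

def simple_isp(raw: np.ndarray, black_level: float = 64.0,
               wb_gains: tuple = (1.9, 1.0, 1.6), gamma: float = 2.2) -> np.ndarray:
    """Toy ISP: black level -> white balance -> naive demosaic -> gamma.
    `raw` is a Bayer-pattern (RGGB) mosaic with 12-bit values in [0, 4095]."""
    # Remove the sensor's black-level offset and normalize to [0, 1].
    x = np.clip(raw - black_level, 0, None) / (4095.0 - black_level)

    # Naive "demosaic": average each 2x2 RGGB cell into one RGB pixel.
    r = x[0::2, 0::2]
    g = (x[0::2, 1::2] + x[1::2, 0::2]) / 2.0
    b = x[1::2, 1::2]
    rgb = np.stack([r * wb_gains[0], g * wb_gains[1], b * wb_gains[2]], axis=-1)

    # Gamma tone curve to map linear light to a display-friendly image.
    return np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)

# Example: run the toy pipeline on a random 12-bit Bayer frame
raw = np.random.default_rng(1).integers(0, 4096, size=(480, 640)).astype(float)
image = simple_isp(raw)
print(image.shape)  # (240, 320, 3), values in [0, 1]
```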

A key thread I saw throughout was that producing high-quality output images across highly dynamic conditions (darkness, entering and exiting tunnels, weather, dirty lenses, etc.) today requires a) ongoing improvements in the underlying components and b) a great deal of expert manual effort to tune each camera configuration, as best as possible, for each vehicle model against the various metrics (KPIs) in use and proposed. Many companies are doing very innovative work, but we still have a way to go.
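That tuning effort can be framed as a black-box optimization problem: search the ISP’s parameter space to maximize a chosen image quality KPI on a set of test scenes. Below is a minimal sketch of that framing using naive random search; the parameter names and the toy scoring function are hypothetical stand-ins for a real ISP’s hundreds of parameters and a real KPI suite, and automated approaches like the ones discussed above go far beyond random sampling:

```python
import numpy as np

rng = np.random.default_rng(2)

def iq_score(params: dict) -> float:
    """Stand-in for a real image quality KPI evaluated on test scenes.
    Here it simply rewards parameters near a hidden 'ideal' tuning so the
    search has something to find; a real system would run the ISP on
    captured charts and scenes and score the output (sharpness, SNR,
    color error, etc.)."""
    ideal = {"wb_red": 1.9, "wb_blue": 1.6, "gamma": 2.2, "denoise": 0.35}
    return -sum((params[k] - ideal[k]) ** 2 for k in ideal)

def random_search(bounds: dict, iters: int = 2000) -> dict:
    """Black-box tuning: sample parameter sets, keep the best-scoring one.
    Expert manual tuning plays a similar role today, guided by experience."""
    best, best_score = None, -np.inf
    for _ in range(iters):
        candidate = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        score = iq_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

bounds = {"wb_red": (0.5, 3.0), "wb_blue": (0.5, 3.0),
          "gamma": (1.0, 3.0), "denoise": (0.0, 1.0)}
print(random_search(bounds))  # should land near the hidden ideal tuning
```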

Ultimately, having “autonomous vision” systems that see and perceive much more clearly than today’s systems do is fundamental to enabling safer vehicles. We at Algolux are excited to be part of the community working on these challenges to achieve that goal sooner.

By Dave Tokic
Vice President of Marketing and Business Development, Algolux
