Automotive Vision: Eliminating Human Error From The Navigation Equation

In October 2009, on the way back from running a half marathon in San Jose, CA, I encountered a first-of-season snowstorm on I-80 just east of the Highway 20 interchange (an area known as Yuba Gap, for folks familiar with the region).

Although the posted speed limit was 65 mph, given the conditions I'd already slowed to around 30 mph (and yes, I was also wearing my seat belt). Nonetheless, the roadway was slick with the moistened accumulation of a summer's worth of leaked oil and gasoline from abundant passing traffic. And, while crossing over a bridge, I hit 'black ice'.

The highway curved to the right, but I couldn't do the same. Instead, my vehicle slid off the left side of the road, did two complete rolls down an embankment, and ended up upright, pointed in the opposite direction, on the shoulder next to the westbound lanes.

Although my Jeep Liberty was totaled, it did its job; I walked away from the crash with nothing but a minuscule scratch on one cheek and a slightly stiff neck. Nonetheless, ever since then I've been (understandably, I think) closely following developments in technologies that augment a driver's native skills to keep him or her and any passengers safe. And as such, I was intrigued to learn last October that Google is aggressively testing autonomous vehicles…which, to be clear, contain human occupants ready to regain manual control in case anything goes wrong.

Back on August 1, regarding the Roomba robotic vacuum cleaner's rudimentary navigation scheme, I wrote:

A more robust approach would leverage embedded vision technology to map the room as it's being traversed, not only remembering the locations of obstacles but also recalling regions that the robot has already traveled through in order to avoid unnecessary path repetition.

And later that same week, back-referencing the earlier writeup, I said:

Robust vision support is at the nexus of robust autonomous robot implementations

It's no surprise, then, that Google's autonomous Toyota Priuses are outfitted with an array of image sensors providing a 360° view of the world (and, specifically, the road) around them, along with radar and laser range-finders. Abundant processing intelligence, augmented by a live stream of GPS location data and a rich Google Maps-plus-Street View database, makes sense of the multiple simultaneously received video feeds. And ironically, one of the autonomous Priuses just had its first accident…when it was under the manual control of its human backup driver.
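
The details of Google's processing pipeline are proprietary, but the basic idea of weighing and combining estimates from dissimilar sensors can be illustrated with a toy example. Below is a minimal Python sketch; the sensor names, confidence values, and the naive confidence-weighted average are all my own illustrative stand-ins for the far more sophisticated fusion a real autonomous vehicle performs.

from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    """One sensor's estimate of a single obstacle (fields are illustrative)."""
    sensor: str          # e.g. "camera", "radar", "lidar"
    bearing_deg: float   # angle to the obstacle, relative to vehicle heading
    range_m: float       # estimated distance to the obstacle
    confidence: float    # 0.0 .. 1.0, sensor- and condition-dependent

def fuse_range(detections: List[Detection]) -> float:
    """Confidence-weighted average of the range estimates.

    A stand-in for real multi-sensor fusion: each sensor 'votes' on the
    obstacle's distance, weighted by how much we trust it at the moment.
    """
    total_weight = sum(d.confidence for d in detections)
    if total_weight == 0:
        raise ValueError("no usable detections")
    return sum(d.range_m * d.confidence for d in detections) / total_weight

if __name__ == "__main__":
    # The same obstacle, reported slightly differently by each sensor.
    reports = [
        Detection("camera", bearing_deg=2.0, range_m=41.0, confidence=0.60),
        Detection("radar",  bearing_deg=1.5, range_m=39.5, confidence=0.90),
        Detection("lidar",  bearing_deg=1.8, range_m=39.8, confidence=0.95),
    ]
    print(f"fused range estimate: {fuse_range(reports):.1f} m")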

At this point, feel free to mentally cue up your favorite scene from The Terminator. In all seriousness, while Google's autonomous vehicle experiment may provide a glimpse of the future (I for one would love to be able to get some work done, or at least take a nap, during regular commutes between my home office in Truckee and BDTI's office in Oakland), more moderate examples of embedded vision-enabled driver assistance can be implemented in the here and now. Take my 2008 Volvo XC70, for example, which replaced the Jeep Liberty. As I wrote on my old EDN Magazine blog back in December 2009:

Take, for example, Park Assist. The front and back bumpers each embed four ultrasound sensors that operate in conjunction with the car’s sound system (which alas does not have built-in Bluetooth audio connectivity) to alert the driver to close-proximity barriers such as other cars’ bumpers in parallel-parking situations, which is where I find the feature most valuable. The ‘beep’ tone differs for front- and back-sensor feedback, and the tone rate increases as you get closer to the sensed object. Granted, it isn’t the more elaborate active park assist program that Ford (which bought Volvo a decade ago and is now in the process of selling it) is now implementing.

But it’s still very helpful. A more elaborate form of the feature, called Collision Warning with Brake Support, employs a front-focused radar system and was not included in my car. Nor was the optional Blind Spot Information System, which leverages cameras built into both side-view mirrors.
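
For the curious, the beep-rate behavior described above boils down to a simple mapping from sensed distance to repetition interval. Here's a rough Python sketch of the idea; the range threshold, intervals, and tone frequencies are guesses for illustration, not Volvo's actual calibration.

from typing import Optional

FRONT_TONE_HZ = 800    # invented values; the real tones just need to be
REAR_TONE_HZ = 1200    # distinguishable from one another

def beep_interval_s(distance_m: float,
                    max_range_m: float = 1.5,
                    min_interval_s: float = 0.05,
                    max_interval_s: float = 0.8) -> Optional[float]:
    """Pause between beeps for a sensed obstacle, or None if out of range.

    The interval shrinks linearly as the obstacle gets closer, so the
    beeping speeds up as you approach the sensed object.
    """
    if distance_m >= max_range_m:
        return None  # nothing close enough to warn about
    fraction = max(distance_m, 0.0) / max_range_m  # 0.0 (touching) .. ~1.0
    return min_interval_s + fraction * (max_interval_s - min_interval_s)

if __name__ == "__main__":
    for d in (2.0, 1.2, 0.6, 0.2):
        interval = beep_interval_s(d)
        status = "silent" if interval is None else f"beep every {interval:.2f} s"
        print(f"{d:.1f} m -> {status}")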

My car’s got a conventional cruise control system, albeit with a Hill Descent Control twist. As the owner’s manual states:

"Normally, when the accelerator pedal is released while driving down hills, the vehicle’s speed slows as the engine runs at lower rpm (the normal engine braking effect). However, if the downhill gradient becomes steeper and if the vehicle is carrying a load, speed increases despite the engine braking effect. In this situation, the brakes must be applied to reduce the vehicle’s speed. HDC is a type of automatic engine brake and makes it possible to increase or decrease the vehicle’s speed on downhill gradients using only the accelerator pedal, without applying the brakes. The brake system functions automatically to maintain a low and steady speed. HDC is particularly useful when driving down steep hills with rough surfaces, and where the road may have slippery patches."

Again, my particular vehicle didn’t come with the more elaborate Adaptive Cruise Control that maintains a set distance from cars ahead of it (using the same radar system employed by Collision Warning with Brake Support). But I do love how its conventional cruise control system still allows me to dial in an exact (MPH or KPH) target speed.
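
The distinction between the two systems is easy to sketch in code: a conventional cruise control only tracks the dialed-in speed, while an adaptive one also backs off when the radar reports a lead vehicle closer than the desired gap. The gains and gap values in this Python sketch are invented for illustration.

from typing import Optional

def cruise_throttle(current_kph: float, set_kph: float, gain: float = 0.05) -> float:
    """Proportional throttle command (0.0 .. 1.0) toward the driver's set speed."""
    return min(max(gain * (set_kph - current_kph), 0.0), 1.0)

def adaptive_set_speed(set_kph: float,
                       lead_gap_m: Optional[float],
                       desired_gap_m: float = 40.0) -> float:
    """Reduce the effective set speed when radar reports a close lead vehicle.

    With no lead vehicle (or one beyond the desired gap), behave exactly
    like conventional cruise control.
    """
    if lead_gap_m is None or lead_gap_m >= desired_gap_m:
        return set_kph
    return set_kph * max(lead_gap_m, 0.0) / desired_gap_m

if __name__ == "__main__":
    current, dialed_in = 95.0, 105.0
    for gap in (None, 60.0, 25.0):
        target = adaptive_set_speed(dialed_in, gap)
        throttle = cruise_throttle(current, target)
        print(f"lead gap {gap}: target {target:.0f} km/h, throttle {throttle:.2f}")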

And keep in mind that at this point, mine's a nearly four-year-old vehicle. Newer models from both Volvo and other manufacturers, especially higher-end variants, contain even more elaborate systems employing not only ultrasonic sensors but also image sensor-fed intelligence. And inevitably, what exists in a luxury vehicle today will sooner or later proliferate into higher-volume mainstream designs.
