For the last decade, the autonomous vehicle revolution has hovered just beyond the horizon. In 2012, Google co-founder Sergey Brin promised that the company’s self-driving cars would be available to everyone within five years. More than five years later, the dream of fully automated travel still eludes the general public.
The primary reason is safety. The artificial intelligence behind vehicles’ visual sensing and perception systems continues to improve, incorporating new advances in deep learning, but these systems remain inferior to their biological counterpart: the human eye. Indeed, even the eyes of small insects such as bees outperform the most advanced artificial vision systems at tasks like real-time sensing and processing, and low-latency motion control.
A new paper from IEEE researchers outlines a technology that aims to bridge this gap by bringing bio-inspired elements into sensing hardware. Still largely unexplored by the automotive industry, event-based neuromorphic vision sensors operate on an entirely different set of principles from traditional sensors, such as CMOS cameras. By mimicking the biological retina at both the system and the element level, this technology, coupled with radar, lidar, and ultrasound systems, could be the key to reaching the safety threshold needed for autonomous transportation’s widespread adoption.
Event-based neuromorphic vision sensors offer lower power consumption and latency than traditional cameras, since each pixel operates independently, eliminating the need for global exposure of a frame. Sensors such as the dynamic vision sensor (DVS) can achieve twice the dynamic range of frame-based cameras, adapting to very dark and very bright stimuli in scenarios such as entering and emerging from a tunnel. Further, analog circuitry allows brightness changes to be captured very quickly; with a 1 MHz clock, events can be detected and timestamped with microsecond resolution. Finally, event-based neuromorphic vision sensors capture fast motion precisely, without the motion blur that plagues traditional sensors.
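To make the per-pixel, asynchronous behavior concrete, the sketch below (in Python, not taken from the paper) models a simple contrast-threshold event camera: each pixel compares its current log-brightness against its own reference level and emits a timestamped ON or OFF event when the change exceeds a threshold, with no global frame exposure. The function name, the threshold value, and the frame-driven simulation are illustrative assumptions; real DVS pixels use continuous-time analog circuits rather than sampled frames.

```python
import numpy as np

def dvs_events(frames, timestamps_us, threshold=0.2):
    """Toy contrast-threshold model of a DVS-style event sensor (illustrative
    only). Each pixel keeps its own log-intensity reference and fires an event
    whenever the change since its last event crosses the threshold,
    independently of every other pixel.

    frames:        (N, H, W) array of intensity images in [0, 1]
    timestamps_us: (N,) microsecond timestamps for each frame
    Returns a list of (t_us, x, y, polarity) tuples.
    """
    log_ref = np.log(frames[0] + 1e-6)   # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps_us[1:]):
        log_i = np.log(frame + 1e-6)
        diff = log_i - log_ref
        on = diff >= threshold            # brightness increase events
        off = diff <= -threshold          # brightness decrease events
        for polarity, mask in ((+1, on), (-1, off)):
            ys, xs = np.nonzero(mask)
            events.extend((int(t), int(x), int(y), polarity)
                          for x, y in zip(xs, ys))
        # Reset the reference only at pixels that fired.
        log_ref = np.where(on | off, log_i, log_ref)
    return events
```

Because output is a sparse stream of per-pixel events rather than full frames, a downstream processor only handles pixels where something changed, which is the source of the low-latency, low-power behavior described above.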
The paper, which to date has been cited 54 times, aims to provide a foundation for other researchers to build upon and further develop this groundbreaking technology. The authors begin by outlining the derivation of neuromorphic vision sensors from their biological counterparts in the retina, then discuss in depth the physics and signal processing techniques behind the technology. From there, the paper reviews the signal processing algorithms and applications for event-based neuromorphic vision in autonomous vehicles and driver assistance systems. The work culminates with an outline of future research directions for the technology.
“Neuromorphic vision sensors pose a paradigm shift to sense and perceive the world for autonomous vehicles, which is however almost undiscovered by the automotive industry,” said co-author Alois Knoll. “In this tutorial-like paper, we aim to build a bridge between the neuroscience and autonomous driving research communities.”
View the full-text article on IEEE Xplore. Read the first page for free. Full article available with purchase or subscription. Contact us to see if your organization qualifies for a free trial.
Interested in expanding your knowledge in Autonomous Vehicles? IEEE offers continuing education with the IEEE Guide to Autonomous Vehicle course program.