Cabin Sensing for Safe and Personalized Driving

About Brain4Cars

In the US alone, more than 33,000 people die in road accidents every year, and a majority of these deaths are caused by risky driving maneuvers. Almost all current driver-assistance systems focus only on the environment outside the car, not inside the cabin. We seek to correct this by jointly sensing the outside and the inside of the car. We monitor the driver through an array of cabin sensors such as cameras, tactile sensors, and wearable devices. This enables our system to learn the driving behavior of each individual driver. One of our applications is to anticipate a driving maneuver several seconds before it happens, which we use to alert the driver.

We use a sensory-fusion deep learning architecture based on Recurrent Neural Networks with Long Short-Term Memory (LSTM) units to combine the information from the various sensors. Our algorithm anticipates maneuvers in real time and provides probabilistic estimates of what might happen several seconds into the future. Currently, our system can anticipate 3.5 seconds into the future with high accuracy.
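The fusion idea above can be sketched in a few lines: concatenate the per-frame features from the cabin and outside sensors, run them through an LSTM, and read out a probability distribution over maneuvers at every timestep. This is only an illustrative sketch with randomly initialized weights and made-up dimensions and class count (not the trained model or the architecture details from the papers):

```python
import numpy as np

# Hypothetical dimensions, for illustration only.
CABIN_DIM, OUTSIDE_DIM, HIDDEN, N_MANEUVERS = 8, 4, 16, 5

rng = np.random.default_rng(0)

def init_lstm(in_dim, hidden):
    """Random parameters for the four LSTM gates (input, forget, output, cell)."""
    return {g: (rng.standard_normal((hidden, in_dim + hidden)) * 0.1,
                np.zeros(hidden)) for g in "ifoc"}

def lstm_step(params, x, h, c):
    """One LSTM step on the fused sensor features x."""
    z = np.concatenate([x, h])
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    i = sigmoid(params["i"][0] @ z + params["i"][1])
    f = sigmoid(params["f"][0] @ z + params["f"][1])
    o = sigmoid(params["o"][0] @ z + params["o"][1])
    g = np.tanh(params["c"][0] @ z + params["c"][1])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def anticipate(cabin_seq, outside_seq):
    """Fuse cabin and outside features frame by frame and emit
    a probability distribution over maneuvers at each timestep."""
    params = init_lstm(CABIN_DIM + OUTSIDE_DIM, HIDDEN)
    W_out = rng.standard_normal((N_MANEUVERS, HIDDEN)) * 0.1
    h, c = np.zeros(HIDDEN), np.zeros(HIDDEN)
    probs = []
    for x_cabin, x_out in zip(cabin_seq, outside_seq):
        h, c = lstm_step(params, np.concatenate([x_cabin, x_out]), h, c)
        probs.append(softmax(W_out @ h))
    return np.array(probs)

# A 20-frame synthetic sequence of sensor features.
probs = anticipate(rng.standard_normal((20, CABIN_DIM)),
                   rng.standard_normal((20, OUTSIDE_DIM)))
```

Because the model emits a distribution at every frame, its prediction can be re-evaluated continuously as new sensor evidence arrives, which is what makes real-time anticipation possible.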

How Does Brain4Cars work?

Recurrent Neural Networks for Driver Activity Anticipation via Sensory-Fusion Architecture

Ashesh Jain, Avi Singh, Hema S Koppula, Shane Soh, Ashutosh Saxena

ICRA 2016 [arXiv] [Code]

Car that Knows Before You Do: Anticipating Maneuvers via Learning Temporal Driving Models

Ashesh Jain, Hema S Koppula, Bharad Raghavan, Shane Soh, Ashutosh Saxena

ICCV 2015 [PDF] [arXiv] [Code and Data set]

Brain4Cars: Car That Knows Before You Do via Sensory-Fusion Deep Learning Architecture

Ashesh Jain, Hema S Koppula, Shane Soh, Bharad Raghavan, Avi Singh, Ashutosh Saxena

Journal (under review), January 2016 [arXiv] [Code and Data set]

Structural-RNN: Deep Learning on Spatio-Temporal Graphs

Ashesh Jain, Amir R. Zamir, Silvio Savarese, Ashutosh Saxena

Tech Report (under review), November 2015 [arXiv] [supplementary] [Code] [Video]

Brain4Cars: Sensory-Fusion Recurrent Neural Models for Driver Activity Anticipation

Ashesh Jain, Shane Soh, Bharad Raghavan, Avi Singh, Hema S Koppula, Ashutosh Saxena

BayLearn Symposium (Full Oral), October 2015 [Extended abstract]

1. We collect thousands of miles of natural driving data from many drivers.

2. We extract contextual information from multiple sources: cameras, GPS, vehicle dynamics, etc.

3. We feed the feature representations into a generative model and anticipate future maneuvers. Predictions are finally forwarded to the ADAS (advanced driver-assistance system).
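The final step, turning probabilistic predictions into driver alerts, can be sketched as a simple confidence threshold. The maneuver names and threshold below are hypothetical placeholders, not values from the papers:

```python
# Hypothetical maneuver classes; "straight" means no maneuver is imminent.
MANEUVERS = ["left_lane_change", "right_lane_change",
             "left_turn", "right_turn", "straight"]

def adas_alert(prob, threshold=0.9):
    """Forward an alert to the ADAS only when a maneuver is anticipated
    with high confidence; normal straight driving raises no alert."""
    k = max(range(len(prob)), key=lambda i: prob[i])
    if MANEUVERS[k] != "straight" and prob[k] >= threshold:
        return MANEUVERS[k]
    return None
```

For example, `adas_alert([0.95, 0.01, 0.01, 0.01, 0.02])` would flag an imminent left lane change, while a flat, uncertain distribution stays silent. Thresholding like this trades anticipation time against false alarms: a lower threshold alerts earlier but interrupts the driver more often.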

Video Demonstration

Contact: