In the US alone, more than 33,000 people die in road accidents every year, and a majority of these accidents are caused by risky driving maneuvers. Almost all current driver assistance systems focus only on the environment outside the car, not on the cabin inside. We seek to correct this by jointly sensing the outside and the inside of the car. We monitor the driver through an array of cabin sensors such as cameras, tactile sensors, and wearable devices. This enables our system to learn the driving behavior of each individual driver. One of our applications anticipates a driving maneuver several seconds before it happens, which we use to alert the driver.
We use a sensory-fusion deep learning architecture based on Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units to combine the information from the various sensors. Our algorithm anticipates maneuvers in real time and provides probabilistic estimates of what might happen several seconds into the future. Currently our system can anticipate maneuvers 3.5 seconds into the future with high accuracy.
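As a rough illustration of the idea, the sketch below fuses cabin and outside features by concatenating them at every time step (early fusion) before a single LSTM cell, whose final hidden state is mapped to a probability distribution over maneuver classes. All dimensions, weights, and the fusion scheme here are illustrative assumptions, not the trained model described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: cabin features (driver-facing camera, wearables)
# and outside features (road camera, GPS, vehicle dynamics) are concatenated
# at each time step before entering one LSTM cell (early fusion).
CABIN_DIM, OUTSIDE_DIM, HIDDEN_DIM, N_MANEUVERS = 8, 6, 16, 5
INPUT_DIM = CABIN_DIM + OUTSIDE_DIM

# Randomly initialized weights stand in for trained parameters.
W = rng.normal(0, 0.1, (4 * HIDDEN_DIM, INPUT_DIM + HIDDEN_DIM))
b = np.zeros(4 * HIDDEN_DIM)
W_out = rng.normal(0, 0.1, (N_MANEUVERS, HIDDEN_DIM))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    """One LSTM update: gates computed from the fused input and prior state."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def anticipate(cabin_seq, outside_seq):
    """Run the fused sequence through the LSTM; return maneuver probabilities."""
    h = np.zeros(HIDDEN_DIM)
    c = np.zeros(HIDDEN_DIM)
    for x_cab, x_out in zip(cabin_seq, outside_seq):
        h, c = lstm_step(np.concatenate([x_cab, x_out]), h, c)
    logits = W_out @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()  # softmax over maneuver classes

T = 20  # e.g. 20 frames of driving context
probs = anticipate(rng.normal(size=(T, CABIN_DIM)),
                   rng.normal(size=(T, OUTSIDE_DIM)))
print(probs)  # one probability per maneuver class; the vector sums to 1
```

In practice such a model would be trained end to end, and the probabilities would be recomputed at every frame so the estimate sharpens as the maneuver approaches.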
1 We collect thousands of miles of natural driving data from many drivers.
2 We extract contextual information from multiple sources: cameras, GPS, vehicle dynamics, etc.
3 We feed the feature representations into a generative model and anticipate future maneuvers. The predictions are then forwarded to the ADAS (advanced driver assistance system).
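The final step of the pipeline can be sketched as a simple decision rule: the model emits a probability distribution over maneuvers at every frame, and an alert is forwarded to the ADAS as soon as some non-default class clears a confidence threshold. The threshold, frame rate, and maneuver labels below are illustrative assumptions, not values from the deployed system.

```python
import numpy as np

# Hypothetical anticipation policy. The model emits per-frame maneuver
# probabilities; we alert the ADAS the first time a non-"straight" class
# clears an assumed confidence threshold.
MANEUVERS = ["straight", "left_lane", "right_lane", "left_turn", "right_turn"]
THRESHOLD = 0.7   # assumed confidence required before alerting
FPS = 25          # assumed camera frame rate

def first_alert(prob_stream):
    """Return (maneuver, seconds before the stream ends) for the first
    confident non-'straight' prediction, or None if none occurs."""
    frames = list(prob_stream)
    for t, probs in enumerate(frames):
        k = int(np.argmax(probs))
        if MANEUVERS[k] != "straight" and probs[k] >= THRESHOLD:
            return MANEUVERS[k], (len(frames) - t) / FPS
    return None

# Toy stream: confidence in "left_lane" ramps up as the maneuver nears.
stream = [np.array([1 - p, p, 0.0, 0.0, 0.0])
          for p in np.linspace(0.05, 0.95, 100)]
maneuver, lead_time = first_alert(stream)
print(maneuver, lead_time)
```

Raising the threshold trades earlier warnings for fewer false alarms, which is the central tension in maneuver anticipation.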
Contact: firstname.lastname@example.org ; email@example.com