I am the co-founder and CEO of Coram AI. Coram AI builds video intelligence systems for businesses of all sizes. With Coram AI, each camera "works for you" instead of just being a video recorder, helping our customers gain new insights into their operations, safety, and security. In my past life, I was the Head of Autonomy at Lyft's Self-Driving Program. My team was responsible for all on-vehicle perception and planning capabilities of Lyft's autonomous vehicles. This included computer vision, 3D perception, prediction, motion planning, and the machine learning infrastructure to deploy real-time deep learning models on the vehicle platform. Prior to Lyft, I was at Zoox.
In my academic life, I obtained my PhD in Computer Science from Cornell University. I was a visiting research scholar at the Stanford AI Lab, where I started the Brain4Cars and RoboBrain projects. I also have a Bachelor's degree in Electrical Engineering from IIT Delhi.
News
Talk on self-supervised learning for autonomous driving at the @Scale Conference, October 2019
Blog post on sensor calibration for autonomous vehicles, August 2019
Lyft open-sourced one of the largest 3D perception and prediction datasets for autonomous vehicles, July 2019
Spotlight from Lyft on my journey, Feb 2019
One paper accepted to CVPR 2018.
Joined the Lyft Self-Driving Program, January 2018
Best student paper award at CVPR 2016 (Deep learning on spatio-temporal graphs)
PhD thesis, May 2016.
Structural-RNN accepted as an ORAL to CVPR 2016.
Our paper on sensory-fusion RNN-LSTM for driver activity anticipation accepted to ICRA 2016
I recently gave talks at Oculus, the University of Washington Seattle, the ICCV workshop on Autonomous Driving (keynote), the BayLearn Symposium, Qualcomm, and Zoox Labs on Deep Learning for Spatio-Temporal Problems: On Cars, Humans, and Robots (ppt with videos, 300 MB) (pdf, 30 MB)
Neuralmodels: A deep learning package for quick prototyping of structures of Recurrent Neural Networks and for deep learning over spatio-temporal graphs (see the illustrative sketch after this list).
Brain4Cars driving data set and sensory-fusion RNN code.
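The spatio-temporal graph modeling in Neuralmodels and Structural-RNN can be conveyed with a short sketch. The code below is a minimal, hypothetical example in PyTorch (not the Neuralmodels API): one edge RNN encodes the interaction features on an edge of the graph, and a node RNN consumes the node's own features together with the edge RNN's output to produce per-time-step predictions. All module names, feature dimensions, and class counts are invented for illustration.

```python
# Hypothetical sketch: an RNN structured over a tiny spatio-temporal graph.
# One "edge RNN" models the interaction on an edge; a "node RNN" combines a
# node's own features with that edge summary. Not the Neuralmodels API.
import torch
import torch.nn as nn

class EdgeRNN(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hid_dim, batch_first=True)

    def forward(self, edge_feats):            # (batch, time, in_dim)
        out, _ = self.rnn(edge_feats)
        return out                            # (batch, time, hid_dim)

class NodeRNN(nn.Module):
    def __init__(self, node_dim, edge_hid, hid_dim, num_classes):
        super().__init__()
        self.rnn = nn.LSTM(node_dim + edge_hid, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, num_classes)

    def forward(self, node_feats, edge_summary):
        out, _ = self.rnn(torch.cat([node_feats, edge_summary], dim=-1))
        return self.head(out)                 # per-time-step class scores

# Toy example: a single "human-object" edge feeding the "human" node's RNN.
edge_rnn = EdgeRNN(in_dim=8, hid_dim=16)
node_rnn = NodeRNN(node_dim=12, edge_hid=16, hid_dim=32, num_classes=5)
edge_feats = torch.randn(4, 10, 8)            # batch of 4, 10 time steps
node_feats = torch.randn(4, 10, 12)
scores = node_rnn(node_feats, edge_rnn(edge_feats))
print(scores.shape)                           # torch.Size([4, 10, 5])
```

In the full Structural-RNN formulation, RNNs are shared across semantically similar nodes and edges of the graph; the sketch keeps a single node and a single edge for brevity.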
My research interest lies at the intersection of machine learning, robotics, and computer vision. Broadly, I build machine learning systems and algorithms that enable agents such as robots and cars to learn from informative human signals at large scale. Most of my work has been in multi-modal, sensor-rich robotic settings, for which I have developed sensory-fusion deep learning architectures. I have developed and deployed algorithms on multiple robotic platforms (PR2, Baxter, and others), on cars, and on crowdsourcing systems.
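As a concrete illustration of the sensory-fusion idea mentioned above, here is a minimal sketch assuming two hypothetical sensor streams (say, driver-facing camera features and vehicle/GPS features) that are each encoded by an LSTM and then fused to anticipate a future maneuver. This is a generic late-fusion example in PyTorch, not the exact Brain4Cars or on-vehicle architecture; all names and dimensions are made up.

```python
# Hypothetical late-fusion sketch: encode each sensor stream with its own LSTM,
# concatenate the final hidden states, and classify the upcoming maneuver.
import torch
import torch.nn as nn

class SensoryFusionRNN(nn.Module):
    def __init__(self, cam_dim, veh_dim, hid_dim, num_maneuvers):
        super().__init__()
        self.cam_rnn = nn.LSTM(cam_dim, hid_dim, batch_first=True)   # camera stream
        self.veh_rnn = nn.LSTM(veh_dim, hid_dim, batch_first=True)   # vehicle stream
        self.fusion = nn.Sequential(
            nn.Linear(2 * hid_dim, hid_dim),
            nn.ReLU(),
            nn.Linear(hid_dim, num_maneuvers),
        )

    def forward(self, cam_seq, veh_seq):
        _, (h_cam, _) = self.cam_rnn(cam_seq)      # final hidden state per stream
        _, (h_veh, _) = self.veh_rnn(veh_seq)
        fused = torch.cat([h_cam[-1], h_veh[-1]], dim=-1)
        return self.fusion(fused)                  # scores over future maneuvers

model = SensoryFusionRNN(cam_dim=32, veh_dim=8, hid_dim=64, num_maneuvers=5)
scores = model(torch.randn(2, 20, 32), torch.randn(2, 20, 8))   # (batch, time, dim)
print(scores.shape)                                              # torch.Size([2, 5])
```

In practice, the per-modality encoders and the fusion head would be trained jointly on labeled driving sequences.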
Learning From Natural Human Interactions For Assistive Robots
PhD Thesis, Ashesh Jain, May 2016 [PDF]
Brain4Cars: Car That Knows Before You Do via Sensory-Fusion Deep Learning Architecture
Ashesh Jain, Hema S Koppula, Shane Soh, Bharad Raghavan, Avi Singh, Ashutosh Saxena
Tech Report (under review), January 2016 [arXiv] [Code and Data set]
Learning Preferences for Manipulation Tasks from Online Coactive Feedback.
Ashesh Jain, Shikhar Sharma, Thorsten Joachims, Ashutosh Saxena
IJRR 2015 [PDF]
Structural-RNN: Deep Learning on Spatio-Temporal Graphs
Ashesh Jain, Amir R. Zamir, Silvio Savarese, Ashutosh Saxena
CVPR 2016 (Full ORAL) (Best Student Paper) [PDF] [arXiv] [supplementary] [Code] [Video]
Car That Knows Before You Do: Anticipating Maneuvers via Learning Temporal Driving Models
Ashesh Jain, Hema S Koppula, Bharad Raghavan, Shane Soh, Ashutosh Saxena
ICCV 2015 [PDF] [Code and Data set] [arXiv]
Brain4Cars: Sensory-Fusion Recurrent Neural Models for Driver Activity Anticipation
Ashesh Jain, Shane Soh, Bharad Raghavan, Avi Singh, Hema S Koppula, Ashutosh Saxena
BayLearn Symposium 2015 [Extended abstract] (Full ORAL)
PlanIt: A Crowdsourcing Approach for Learning to Plan Paths from Large Scale Preference Feedback.
Ashesh Jain, Debarghya Das, Jayesh Gupta, Ashutosh Saxena
ICRA 2015 [PDF]
RoboBrain: Large-Scale Knowledge Engine for Robots
Ashutosh Saxena, Ashesh Jain, Ozan Sener, Aditya Jami, Dipendra K Misra, Hema S Koppula
ISRR 2015 [arXiv]
Anticipatory Planning for Human-Robot Teams.
Hema S Koppula, Ashesh Jain, Ashutosh Saxena
ISER 2014 [PDF]
Beyond Geometric Path Planning: Learning Context-Driven Trajectory Preferences via Sub-optimal Feedback.
Ashesh Jain, Shikhar Sharma, Ashutosh Saxena
ISRR 2013 [PDF]
Learning Trajectory Preferences for Manipulators via Iterative Improvement.
Ashesh Jain, Brian Wojcik, Thorsten Joachims, Ashutosh Saxena
NIPS 2013 [PDF]
Brain4Cars: Sensory-fusion deep learning for smart-cars
Interactive human-robot learning from coactive feedback
Anticipating driver maneuvers a few seconds in advance
Structural-RNN: Deep Learning on Spatio-Temporal Graphs
Invited talk at Oculus (Facebook), April 2016
Keynote at the ICCV workshop on Autonomous driving. Title: Deep Learning for Spatio-Temporal Problems: On Cars, Humans, and Robots. Dec 2015
Invited talk at Zoox Labs (autonomous driving startup), Dec 2015
Talk at University of Washington Seattle Department of Computer Science, Nov 2015
Invited talk at Qualcomm Deep Learning Research Center, Nov 2015
Oral at BayLearn 2015 on Brain4Cars: Sensory-fusion Recurrent Neural Networks (Video)
Invited Talk at RSS Workshop on Model Learning for Human-Robot Communication, July 2015
Invited Talk at ICML Workshop on Machine Learning for Interactive Systems, July 2015
Invited Talk at ICRA Tutorial on Planning, Control, and Sensing for Safe Human-Robot Interaction, May 2015
Invited Talk at IIT Kanpur Department of Computer Science. RoboBrain and Learning from Weak Signals, Feb 2015
Stanford Semantics and Geometry Seminar. RoboBrain and Learning from Weak Signals, Feb 2015
Stanford Robotics Seminar. Learning from Weak Signals, Nov 2014
Introductory talk at LPCHS workshop RSS 2014. Learning from Humans. (Slides)
Cornell AI Seminar and ISRR 2013. Beyond Geometric Path Planning. (Slides)
ICML Robot Learning workshop 2013. (Slides)
Invited spotlight at Mysore Park Workshop on Machine Learning 2012. (Video) (Slides)
Lecture at Indo-German Winter Academy 2010. (Slides)