ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst
Top 1% of 2019 papers
Abstract
Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert's driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress; the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the ChauffeurNet model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a real car at our test facility.
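The two ideas in the abstract — synthesizing worst-case data by perturbing the expert's trajectory, and augmenting the imitation loss with environment losses — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the triangular blending scheme, and the loss weights are all assumptions made for clarity (the paper perturbs the agent's current pose and fits a smooth trajectory back to the original).

```python
import numpy as np

def perturb_trajectory(traj, max_offset=1.0, seed=None):
    """Synthesize an off-nominal training example by laterally shifting
    an expert trajectory and smoothly blending back to it.

    traj: (N, 2) array of (x, y) expert waypoints.
    Simplified stand-in for the paper's perturbation scheme.
    """
    rng = np.random.default_rng(seed)
    n = len(traj)
    offset = rng.uniform(-max_offset, max_offset)
    # Triangular blending weights: zero at both endpoints, peak in the
    # middle, so the perturbed path still starts and ends on the expert's.
    w = 1.0 - np.abs(np.linspace(-1.0, 1.0, n))
    perturbed = traj.copy()
    perturbed[:, 1] += offset * w  # lateral (y) shift, largest mid-trajectory
    return perturbed

def total_loss(imit_loss, collision_loss, offroad_loss, progress_loss,
               w_imit=1.0, w_env=1.0):
    """Augmented objective: imitation loss plus environment losses that
    penalize collisions / going off-road and encourage progress.
    The weights here are illustrative, not the paper's values."""
    return (w_imit * imit_loss
            + w_env * (collision_loss + offroad_loss + progress_loss))

# Example: perturb a straight expert drive along the x-axis.
expert = np.stack([np.linspace(0.0, 10.0, 11), np.zeros(11)], axis=1)
synth = perturb_trajectory(expert, max_offset=1.0, seed=0)
```

The perturbed examples matter precisely because the environment losses fire on them: a perturbation that drags the agent toward the road edge produces a nonzero off-road penalty, giving the model a gradient signal that pure imitation of clean expert data never provides.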