Title: Vision for Robotics: a View from Autonomous Driving
Speaker: Yang Gao, UC Berkeley
Time: 2018-12-31 10:00-2018-12-31 11:00
Venue: FIT 1-222


Autonomous driving has attracted a lot of attention in the past few years. A typical self-driving system requires manual design and coordination across multiple modules, such as perception, behavior prediction, planning, and trajectory generation. Such manually designed information flows may be far from optimal. Deep learning has dramatically improved image recognition performance by replacing manually developed pipelines. Inspired by this revolution, we study the possibility of an end-to-end approach to autonomous driving. This talk contains three parts. First, we investigate whether an end-to-end method can learn complex behaviors in urban driving scenarios. Second, we discuss how to run the trained agent on a real vehicle. Finally, we introduce a new training scheme that combines imitation learning and reinforcement learning in a unified framework, achieving high sample efficiency and promising performance at the same time.
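The abstract does not specify how the imitation and reinforcement signals are combined; one common way to unify them is to optimize a weighted sum of a behavior-cloning objective on expert demonstrations and a policy-gradient (REINFORCE-style) objective on the agent's own rollouts. The toy sketch below illustrates that idea on a one-parameter logistic policy; the mixing weight `il_weight`, the demo data, and the reward function are all illustrative assumptions, not the speaker's actual method.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy logistic policy pi(a=1 | s) = sigmoid(w * s) for a scalar state s.
w = 0.0
lr = 0.1
il_weight = 0.5  # assumed knob trading off imitation vs. reinforcement

# Hypothetical expert demonstrations: (state, expert action).
demos = [(1.0, 1), (2.0, 1), (-1.0, 0)]

def reward(s, a):
    # Hypothetical reward: +1 if the action matches the sign of the state.
    return 1.0 if (s > 0) == (a == 1) else -1.0

for _ in range(200):
    # Imitation (behavior cloning) gradient: ascend log pi(expert a | s).
    g_il = sum((a - sigmoid(w * s)) * s for s, a in demos) / len(demos)

    # Reinforcement (REINFORCE) gradient from actions sampled by the policy.
    g_rl = 0.0
    for s, _ in demos:
        a = 1 if random.random() < sigmoid(w * s) else 0
        g_rl += reward(s, a) * (a - sigmoid(w * s)) * s
    g_rl /= len(demos)

    # Unified update: weighted combination of the two gradients.
    w += lr * (il_weight * g_il + (1 - il_weight) * g_rl)

# After training, the policy prefers action 1 for positive states (w > 0).
```

The imitation term keeps early learning sample-efficient by anchoring the policy to the demonstrations, while the reinforcement term lets it improve beyond the expert where the reward signal disagrees; here both signals happen to point the same way, so `w` grows positive.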

Short Bio:

Yang Gao is a 5th-year Ph.D. student in the Computer Science Department at UC Berkeley, advised by Professor Trevor Darrell. He is mainly interested in computer vision and robotic learning. Before that, he graduated from the Computer Science Department at Tsinghua University, where he worked with Prof. Jun Zhu on Bayesian inference. He interned at Google Research on natural language processing from 2011 to 2012, with Dr. Edward Y. Chang and Dr. Fangtao Li, and on the Waymo autonomous driving team during the summer of 2016. He also worked on autonomous driving at Intel Research during the summer of 2018, with Dr. Vladlen Koltun.