Speaker: Zhongyu Li, UC Berkeley
Time: 2023-08-31 15:00-2023-08-31 16:00
Venue: C19-2 or Tencent Meeting: 398-077-2882, Code: 095015
In this talk, I will give a brief introduction to our recent progress in applying optimal control and deep reinforcement learning (RL) to legged robots in the real world. I will present some details of our recent success in using RL to achieve robust and dynamic legged locomotion control in the real world, such as bipedal jumping. I will then dive into our recent work on bridging model-based safety-critical control and model-free RL on a highly nonlinear and complex system, such as the bipedal robot Cassie. Bridging model-based safety and model-free RL for dynamic robots is appealing because model-based methods can provide formal safety guarantees, while RL-based methods can exploit robot agility by learning from the full-order system dynamics. I will discuss a new method that combines them by explicitly finding a low-dimensional model of the system controlled by an RL policy. This talk will not be limited to legged locomotion; I will also discuss the potential to endow legged robots with greater intelligence.
Zhongyu Li is a fifth-year PhD student in Mechanical Engineering at UC Berkeley. He is advised by Prof. Koushil Sreenath and focuses on optimal control and reinforcement learning (RL) for legged robots. His work has enabled the bipedal robot Cassie to perform robust and agile maneuvers and to navigate autonomously in unknown and cluttered environments. Zhongyu’s work was a finalist for the Best Entertainment and Amusement Paper Award (IROS 2020), the Best Service Robot Paper Award (ICRA 2021), and the Best RoboCup Paper Award (IROS 2022). He was selected as an RSS Pioneer in 2023.