Speaker: Sizhe Lester Li, MIT EECS
Time: 2024-08-08, 15:00-16:00
Venue: Zoom (Meeting ID: 898 9775 3226, Passcode: 584190)
Abstract:
Using vision, humans constantly perceive how their bodies occupy space and reason about how their muscle commands relate to motion in the physical world. In robotics, these two capabilities correspond to the problems of modeling and controlling an actuated system. Conventional robots are designed to be easy to model and control, but it remains an open challenge to model and control bio-inspired robots, which are often multi-material or soft, lack onboard sensing, and may change their material properties with use.
In this talk, I will introduce Neural Jacobian Fields and pixelSplat, two architectures that learn to model and control robots from vision alone. Our approach makes no assumptions about the robot’s materials, actuation, or sensing; requires only a single camera for control; and learns to control the robot without expert intervention, simply by observing the robot execute random commands. We demonstrate our method on a diverse set of robot manipulators varying in actuation, materials, fabrication, and cost. Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot. By enabling robot control with a generic camera as the only sensor, we anticipate that this work will dramatically broaden the design space of robotic systems and serve as a starting point for lowering the barrier to robotic automation.
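For intuition only, the sketch below shows one way a learned, vision-conditioned Jacobian could drive closed-loop control: a model predicts how each tracked point moves per unit of command, and a least-squares solve picks the command update that best moves the points toward their targets. The name jacobian_field, its signature, and the damped update are illustrative assumptions for this announcement, not the actual architecture presented in the talk.

    import numpy as np

    # Hypothetical sketch: jacobian_field(image, points) stands in for a learned
    # model that predicts, for each tracked point, the Jacobian J relating a
    # command change du to that point's motion, dp ~ J @ du.
    def control_step(jacobian_field, image, points, target_points, step=0.5):
        """One closed-loop step: pick the command update that best moves the
        tracked points toward their targets, in the least-squares sense."""
        J = jacobian_field(image, points)   # shape (num_points, 3, num_commands)
        dp = target_points - points         # desired per-point motion, (num_points, 3)
        # Stack the per-point systems into one linear problem J_all @ du ~ dp_all.
        J_all = J.reshape(-1, J.shape[-1])
        dp_all = dp.reshape(-1)
        du, *_ = np.linalg.lstsq(J_all, dp_all, rcond=None)
        return step * du                    # damped update for stability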
Short Bio:
Sizhe Lester Li is a rising second-year Ph.D. student at MIT EECS, advised by Prof. Vincent Sitzmann and Prof. Josh Tenenbaum. Lester’s research lies at the intersection of vision and graphics, robotics, and computational cognitive science; it connects intuitive physics with scene perception and robotic planning, grounded in inverse modeling. Lester is supported by the MIT Presidential Fellowship.