Speaker: Xi Wang is an ETH Fellow in the Advanced Interactive Technologies lab at ETH Zurich
Time: 2023-12-21, 16:30-17:30
Venue: C19-2 or https://meeting.tencent.com/dm/sbSrNWRepCFZ
Abstract:
Research in artificial intelligence (AI) continues to advance quickly, outperforming humans in many tasks and making its way into our daily lives. However, beneath this superior performance, current technologies remain limited in how they perceive, process, and understand our visual world, and they struggle to understand and interact with people. These issues raise the core question of my research: how do we build intelligent systems that can interact with people and offer assistance in a natural and seamless way? In this talk, I will present our recent work on using vision-language models and large language models for action understanding in egocentric videos.
Short Bio:
Xi Wang is an ETH Fellow in the Advanced Interactive Technologies lab led by Otmar Hilliges and the Computer Vision Lab led by Luc Van Gool at ETH Zurich. Her research focuses on human-centric learning: bringing human common sense and behavior patterns into machine learning and enabling machines to interact intelligently with humans and the world. Currently, she is interested in how human intent drives actions and interactions with the surrounding environment. She received her Ph.D. in Computer Science from the Technical University of Berlin, advised by Marc Alexa. During her Ph.D., she visited MIT, working in the Computational Perception & Cognition Group led by Aude Oliva, and interned at Adobe Research, working with Zoya Bylinskii and Aaron Hertzmann.