Speaker: Diyi Yang, Stanford University
Time: 2023-12-25, 10:00-12:00
Venue: FIT 1-222
Abstract:
Large language models have revolutionized the way humans interact with AI systems, transforming a wide range of fields and disciplines. In this talk, we discuss several approaches to enhancing human-AI interaction using LLMs. The first part explores how large language models transform computational social science, and how human-AI collaboration can reduce costs and improve the efficiency of social science research. The second part develops parameter-efficient learning techniques for adapting LLMs to low-resource languages and dialects, toward more accessible human-AI interaction. We conclude with interactive LLM agents that support therapists via AI-generated feedback. Together, these works demonstrate how human-AI interaction via LLMs can empower individuals and foster positive change.
Short Bio:
Diyi Yang is an assistant professor in the Computer Science Department at Stanford University, also affiliated with the Stanford NLP Group, the Stanford HCI Group, and the Stanford Institute for Human-Centered Artificial Intelligence (HAI). Diyi received her PhD from Carnegie Mellon University and her bachelor's degree from Shanghai Jiao Tong University. Her research focuses on natural language processing, machine learning, and computational social science. Her work has received more than 10 best paper nominations or awards at top NLP and HCI conferences (e.g., ACL, EMNLP, SIGCHI, ICWSM, and CSCW). She is a recipient of the IEEE "AI's 10 to Watch" award (2020), the Intel Rising Star Faculty Award (2021), the Samsung AI Researcher of the Year award (2021), the Microsoft Research Faculty Fellowship (2021), the NSF CAREER Award (2022), and an ONR Young Investigator Award (2023).