Building computer systems that read and comprehend natural language documents remains a long-standing challenge in Artificial Intelligence. In the past few years, thanks to the rapid development of deep neural networks and the emergence of new paradigms for language representation (e.g., pre-training and fine-tuning), we have seen unprecedented success on many natural language understanding benchmarks. How far have we come? How far are we still from the grand goal of achieving human-level understanding? In this talk, I will share many exciting results in NLP and introduce some of my recent work on machine reading and question answering. I would also like to discuss some of the biggest lessons we have learned and promising future directions.
Danqi Chen is an Assistant Professor of Computer Science at Princeton University and co-leads the Princeton NLP Group. Her research interests lie in natural language processing and machine learning (deep learning in particular), and her work centers on how computers can achieve a deep understanding of human language and the information it contains. Before joining Princeton in Fall 2019, Danqi worked as a visiting scientist at Facebook AI Research (FAIR). She received her PhD from the Department of Computer Science at Stanford University in 2018 and her B.E. from the Yao Class at Tsinghua University in 2012. She is a recipient of Outstanding Paper Awards at ACL ’16 and EMNLP ’17, a Facebook Fellowship, and a Microsoft Research Women’s Fellowship.