Efficient and Effective Models for Machine Reading Comprehension

Speaker: Wei Yu, Carnegie Mellon University
Time: 2018-10-19, 15:00-16:00
Venue: FIT 1-222
Abstract:

Machine reading comprehension has attracted a great deal of attention in the machine learning and natural language processing communities. In this talk, I will introduce two efficient and effective models for this task.

First, I will present a model, LSTM-Jump, that skips unimportant information in sequential data, mimicking the skimming behavior of human reading. Trained with an efficient reinforcement learning algorithm, the model can run several times faster than a vanilla LSTM at inference time.
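To make the skimming idea concrete, here is a minimal sketch of the read-then-jump control flow: the reader consumes a few tokens, then a policy chooses how far to skip ahead. The function name, parameters, and the fixed maximum-jump policy are illustrative assumptions, not the paper's actual model, which predicts the jump with an LSTM trained by policy gradient.

```python
# Toy simulation of jump-based skim reading. Illustrative only:
# a real LSTM-Jump model predicts the jump size from its hidden
# state and is trained with REINFORCE.

def skim_read(tokens, read_size=2, max_jump=3, policy=None):
    """Return the indices of tokens actually processed.

    policy(processed_so_far) -> jump size in [0, max_jump];
    a jump of 0 means continue reading normally.
    """
    if policy is None:
        # Placeholder policy: always take the maximum jump.
        policy = lambda state: max_jump
    processed = []
    i = 0
    while i < len(tokens):
        # Read a small chunk of consecutive tokens.
        chunk_end = min(i + read_size, len(tokens))
        processed.extend(range(i, chunk_end))
        i = chunk_end
        # Skip ahead according to the policy.
        i += policy(processed)
    return processed

tokens = list(range(20))
seen = skim_read(tokens)
print(len(seen), "of", len(tokens), "tokens processed")  # 8 of 20
```

Because only a fraction of the sequence is ever fed through the recurrent cell, inference cost drops roughly in proportion to the tokens skipped.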

Then I will introduce a sequence encoding method that discards recurrent networks and thus fully supports parallel training and inference. Based on this technique, we propose a new question-answering model, QANet. Combined with a data augmentation approach based on back-translation, the model achieved the No. 1 position on the competitive Stanford Question Answering Dataset (SQuAD) as of August 2018, while being several times faster than the prevalent models. Notably, QANet's exact match score exceeded human performance by a large margin.
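The key property of such a non-recurrent encoder is that every output position is computed from all inputs with a few matrix multiplies, so the work parallelizes across positions instead of running step by step. Below is a generic sketch of scaled dot-product self-attention in NumPy; it is an assumption-laden simplification, not QANet's exact encoder, which also interleaves convolutions, feed-forward layers, and positional encodings.

```python
import numpy as np

# Minimal scaled dot-product self-attention: the whole sequence is
# encoded in one batch of matrix products, with no recurrence.
# Generic sketch, not QANet's full encoder block.

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); W*: (d_model, d_k). Returns (seq_len, d_k)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # (seq_len, seq_len)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # mix values in parallel

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 6, 8, 4
X = rng.normal(size=(seq_len, d_model))
out = self_attention(X, *(rng.normal(size=(d_model, d_k)) for _ in range(3)))
print(out.shape)  # (6, 4)
```

Since no step depends on the previous step's output, training and inference over a sequence reduce to dense matrix operations that map well onto GPUs, which is the source of the speedup over recurrent models.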

Bio:

(Adams) Wei Yu is a Ph.D. candidate in the Machine Learning Department at Carnegie Mellon University, advised by Professors Jaime Carbonell and Alex Smola. His research interests lie in artificial intelligence, encompassing deep learning, large-scale optimization, and natural language processing. The main theme of his research is to accelerate AI by designing efficient models and algorithms. His work has been published in leading conferences and journals, including ICML, NIPS, ICLR, ACL, COLT, JMLR, AISTATS, AAAI, and VLDB. His paper was selected as a finalist for the INFORMS 2014 Data Mining Best Student Paper Award, and a coauthored paper was nominated for Best Paper at ICME 2011. He is an NVIDIA PhD Fellow, a Snap PhD Fellow, a Siebel Scholar, and a CMU Presidential Fellow. He served as the Workflow Chair of AISTATS 2017.