Most people interact with machine learning systems on a daily basis. Such interactions often happen in strategic environments where people have incentives to manipulate the learning algorithms. As machine learning plays an increasingly prominent role in our society, it is important to understand whether existing algorithms are vulnerable to adversarial attacks and, if so, to design new algorithms that are robust in these strategic environments. In this talk, I will focus on two lines of my recent work on designing fast and provably robust learning algorithms: making non-convex matrix completion approaches robust against semi-random adversaries, and designing robust high-dimensional statistical estimators that can be computed almost as efficiently as their non-robust counterparts. Most of the talk is based on joint work with Ilias Diakonikolas and Rong Ge.
Yu Cheng is an assistant professor in the Department of Mathematics, Statistics, and Computer Science (MSCS) at the University of Illinois at Chicago. He obtained his Ph.D. from the University of Southern California and was a postdoctoral researcher at Duke University. His research interests include machine learning, game theory, and optimization.