Time: 2023-02-24, 16:00–17:00
Venue: FIT 1-222 + Tencent Meeting (ID: 891-892-817)
Consider a regression problem where the learner is given a large collection of d-dimensional data points but can only query a small subset of the real-valued labels. How many queries are needed to obtain a 1+ϵ relative error approximation of the optimum? While this problem has been extensively studied for least squares regression, little is known for other losses. An important example is least absolute deviation regression (ℓ1 regression) which enjoys superior robustness to outliers compared to least squares. We develop a new framework for analyzing importance sampling methods in regression problems, which enables us to show that the query complexity of least absolute deviation regression is Θ(d/ϵ^2) up to logarithmic factors. We further extend our techniques to show the first bounds on the query complexity for any ℓp loss with p∈(1,2).
As a key novelty in our analysis, we introduce the notion of robust uniform convergence, which is a new approximation guarantee for the empirical loss. While it is inspired by uniform convergence in statistical learning, our approach additionally incorporates a correction term to avoid unnecessary variance due to outliers. This can be viewed as a new connection between statistical learning theory and variance reduction techniques in stochastic optimization, which should be of independent interest.
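To make the setting concrete, here is a minimal, hypothetical sketch (not the speakers' actual algorithm) of importance-sampling ℓ1 regression: the learner queries only m of the n labels, sampled with row-dependent probabilities, reweights the subsample for unbiasedness, and solves least absolute deviation on it. Row norms stand in for the more refined sampling scores the talk's analysis is built on, and iteratively reweighted least squares is used as a simple ℓ1 solver.

```python
import numpy as np

def l1_regression(A, b, iters=100, eps=1e-8):
    """Approximate argmin_x ||Ax - b||_1 via iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        # Weighted least squares with weights 1/|residual| reproduces the l1 objective.
        w = 1.0 / np.sqrt(np.maximum(np.abs(A @ x - b), eps))
        x = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)[0]
    return x

rng = np.random.default_rng(0)
n, d, m = 2000, 5, 400                      # query only m << n labels
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)
b[:n // 20] += 50.0                          # heavy outliers; l1 is robust to these

# Importance sampling: probabilities proportional to row norms (a crude
# surrogate for the scores in the talk), then query just those m labels.
p = np.linalg.norm(A, axis=1)
p /= p.sum()
idx = rng.choice(n, size=m, replace=True, p=p)
scale = 1.0 / (m * p[idx])                   # reweight so the subsampled loss is unbiased

x_sub = l1_regression(A[idx] * scale[:, None], b[idx] * scale)

full_loss = np.abs(A @ l1_regression(A, b) - b).sum()
sub_loss = np.abs(A @ x_sub - b).sum()
```

With only 20% of the labels queried, the subsampled solution's full ℓ1 loss stays close to the optimum in this toy run; the talk's result makes this trade-off precise, showing Θ(d/ϵ²) queries suffice (and are needed) for a 1+ϵ approximation.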
Based on joint work with Michal Derezinski (University of Michigan).
Xue Chen is a faculty member in the School of Computer Science at the University of Science and Technology of China. He received his BS from the Yao Class at Tsinghua University and his PhD from the University of Texas at Austin, and was a postdoc at Northwestern University (USA). His research focuses primarily on randomized algorithms, with specific interests in the fast Fourier transform, big-data algorithms, learning theory, pseudorandom generators, coding theory, and cryptography.