Improved Guarantees for Agnostic Learning of Disjunctions

Speaker: Avrim Blum, Carnegie Mellon University
Time: 2010-07-01, 14:00-15:00

Given an arbitrary distribution D over {0,1}^n with points labeled by an arbitrary target function c, the goal in agnostic learning of disjunctions is to achieve an error rate comparable to the error OPT of the best disjunction (the best OR-function over a subset of the variables x_1,...,x_n) with respect to (D,c). While there have been positive results for the case of "nice" distributions D (e.g., uniform), nearly all results for general distributions have been negative. In recent work, [Peleg07] showed how, for general D, to achieve error O(sqrt(n)*OPT) + epsilon in polynomial time. In this work we improve on Peleg's bound, giving a polynomial-time algorithm that achieves a bound of O(n^{1/3 + alpha}*OPT) + epsilon for any constant alpha > 0. This leaves the clear open question of how low one can go.
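To make the objective concrete, here is a minimal sketch (not the talk's algorithm) of what OPT means: the empirical error of a disjunction over a labeled sample, and a brute-force search for the best one. The function names and the 0/1-tuple sample encoding are illustrative assumptions; the exhaustive search takes 2^n time, which is exactly what the polynomial-time algorithms in the talk avoid.

```python
import itertools

def disjunction_error(subset, samples):
    """Empirical error of the OR over the variables indexed by `subset`.

    samples: list of (x, label) pairs, where x is a tuple of 0/1 values
    and label is in {0, 1}.  (Hypothetical helper, for illustration.)
    """
    mistakes = sum(
        (max((x[i] for i in subset), default=0) != y)  # OR of selected bits
        for x, y in samples
    )
    return mistakes / len(samples)

def best_disjunction_error(n, samples):
    """Brute-force OPT: minimum error over all 2^n disjunctions.

    Exponential time -- illustrative only; agnostic learners aim to get
    close to this value efficiently.
    """
    return min(
        disjunction_error(subset, samples)
        for r in range(n + 1)
        for subset in itertools.combinations(range(n), r)
    )
```

For example, on samples labeled exactly by x_1, the disjunction {x_1} has zero empirical error, so OPT = 0; an agnostic learner must do well even when no disjunction fits perfectly and OPT > 0.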

This is joint work with Pranjal Awasthi and Or Sheffet.



Avrim Blum is Professor of Computer Science at Carnegie Mellon University and a member of the ITCS Chair Professor Team. His main research interests are in machine learning theory, approximation and online algorithms, and algorithmic game theory. He has served as Program Chair for the IEEE Symposium on Foundations of Computer Science (FOCS) and the Conference on Learning Theory (COLT), as well as on the organizing committee for the National Academy of Sciences Frontiers of Science Symposium. He is a recipient of the Sloan Fellowship, the NSF National Young Investigator Award, and the ICML/COLT 10-year best paper award, and is a Fellow of the ACM.