Speaker: Avrim Blum, Carnegie Mellon University
Time: 2010-07-01 14:00-15:00
Venue: FIT 1-222
Abstract:
Given an arbitrary distribution D over {0,1}^n with points labeled by an arbitrary target function c, the goal in agnostic learning of disjunctions is to achieve an error rate comparable to the error OPT of the best disjunction (the best OR-function over a subset of {x_1,...,x_n}) with respect to (D,c). While there have been positive results for the case of "nice" distributions D (e.g., uniform), nearly all results for the case of general distributions have been negative. In recent work, [Peleg07] shows how, for general D, to achieve error O(sqrt(n)*OPT) + epsilon in polynomial time. In this work we improve on Peleg's bound, giving a polynomial-time algorithm that achieves a bound of O(n^{1/3 + alpha}*OPT) + epsilon for any constant alpha > 0. This leaves a clear open question of how low one can go.
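To make the objects in the abstract concrete, the following sketch (my own illustration, not the talk's algorithm) evaluates disjunctions on a labeled sample and brute-forces OPT, the lowest empirical error any OR over a subset of variables achieves. All function names and the toy data are hypothetical; the brute-force search is exponential in n and only feasible for tiny n, whereas the talk concerns polynomial-time approximation.

```python
from itertools import combinations

def disjunction_error(subset, samples):
    """Fraction of labeled points (x, y) misclassified by the OR
    over the variables in `subset`."""
    errors = sum(any(x[i] for i in subset) != y for x, y in samples)
    return errors / len(samples)

def best_disjunction(n, samples):
    """Brute force over all 2^n variable subsets (tiny n only):
    returns (OPT, subset achieving it)."""
    best = (1.0, ())
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            best = min(best, (disjunction_error(subset, samples), subset))
    return best

# Toy sample over {0,1}^3: labels mostly follow x_0 OR x_2,
# but the last point is noisy, so no disjunction is perfect.
samples = [((0, 0, 0), 0), ((1, 0, 0), 1), ((0, 1, 0), 0),
           ((0, 0, 1), 1), ((1, 1, 0), 1), ((0, 1, 1), 0)]
opt, subset = best_disjunction(3, samples)
print(opt, subset)  # OPT = 1/6: one point must be misclassified
```

An agnostic learner is measured against this OPT: it must output a hypothesis whose error under D is within the stated factor of the best disjunction's error, even though the labels need not come from any disjunction.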
This is joint work with Pranjal Awasthi and Or Sheffet.
Short Bio: