A wide range of positive and negative results has been established for learning different classes of Boolean functions from uniformly distributed random examples. However, polynomial-time algorithms have thus far been obtained almost exclusively for various classes of monotone functions, while the computational hardness results obtained to date have all been for various classes of general (nonmonotone) functions. Motivated by this disparity between known positive results (for monotone functions) and negative results (for nonmonotone functions), we establish strong computational limitations on the efficient learnability of various classes of monotone functions. We give several such hardness results which are provably almost optimal, since they nearly match known positive results. Some of our results show cryptographic hardness of learning polynomial-size monotone circuits to accuracy only slightly greater than 1/2 + 1/sqrt(n); this accuracy bound is close to optimal by known positive results (Blum et al., FOCS '98). Other results show that, under a plausible cryptographic hardness assumption, a class of constant-depth, subpolynomial-size circuits computing monotone functions is hard to learn; this result is close to optimal in terms of the circuit size parameter by known positive results as well (Servedio, Information and Computation '04). Our main tool is a complexity-theoretic approach to hardness amplification via noise sensitivity of monotone functions that was pioneered by O'Donnell (JCSS '04).
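The hardness-amplification tool mentioned above is built on the noise sensitivity of monotone functions: NS_delta(f) is the probability that f(x) differs from f(y), where x is a uniformly random input and y is obtained by flipping each bit of x independently with probability delta. As a rough illustration of the quantity itself (not of the paper's construction), the sketch below estimates noise sensitivity by Monte Carlo sampling, using the majority function as an example of a monotone function; the function names and parameters are chosen here for illustration only.

```python
import random

def majority(x):
    # A canonical monotone Boolean function: outputs 1 iff
    # more than half of the input bits are 1 (n odd avoids ties).
    return int(2 * sum(x) > len(x))

def noise_sensitivity(f, n, delta, trials=20000, seed=0):
    """Monte Carlo estimate of NS_delta(f) = Pr[f(x) != f(y)],
    where x is uniform over {0,1}^n and y flips each bit of x
    independently with probability delta."""
    rng = random.Random(seed)
    disagreements = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = [b ^ (rng.random() < delta) for b in x]
        if f(x) != f(y):
            disagreements += 1
    return disagreements / trials

# With no noise the output never changes; with delta = 0.1,
# majority on 101 bits flips with small but non-negligible probability.
print(noise_sensitivity(majority, 101, 0.0))
print(noise_sensitivity(majority, 101, 0.1))
```

For majority, this estimate concentrates around arccos(1 - 2*delta)/pi as n grows, reflecting the Theta(sqrt(delta)) noise sensitivity of majority that is typical of "noise-stable" monotone functions.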
Joint work with Dana Dachman-Soled, Homin K. Lee, Tal Malkin, Rocco A. Servedio, and Hoeteck Wee.
Andrew Wan is a PhD student at Columbia University under the supervision of Professor Rocco Servedio and Professor Tal Malkin. His main interests are computational complexity and the relationships between cryptography and learning theory.