Improved Guarantees for Agnostic Learning of Disjunctions
March 23, 2011
ABSTRACT:
Given an arbitrary distribution D over the Boolean hypercube {0,1}^n and
an arbitrary target function c*, the problem of agnostic learning of
disjunctions is to achieve an error rate comparable to the error rate
OPT of the best disjunction with respect to (D, c*).
Achieving error O(n OPT) + \epsilon is trivial, and
the well-known Winnow algorithm achieves error O(r OPT) +
\epsilon, where r is the number of relevant variables in the best
disjunction.
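For concreteness, the following minimal Python sketch gives the
classical Winnow update (promotion/demotion by a factor of 2 against a
threshold of n) for monotone disjunctions over {0,1}^n. It illustrates
the standard algorithm referenced above, not the new algorithm of this
work; the function name and training interface are illustrative only.

    def winnow(examples, n, factor=2.0):
        """Classical Winnow for monotone disjunctions.

        examples: iterable of (x, y) pairs with x in {0,1}^n, y in {0,1}.
        Returns the learned weight vector.
        """
        theta = float(n)                 # standard threshold choice
        w = [1.0] * n                    # all weights start at 1
        for x, y in examples:
            total = sum(w[i] for i in range(n) if x[i] == 1)
            y_hat = 1 if total >= theta else 0
            if y_hat == 0 and y == 1:
                # Promotion: scale up weights of active variables.
                for i in range(n):
                    if x[i] == 1:
                        w[i] *= factor
            elif y_hat == 1 and y == 0:
                # Demotion: scale down weights of active variables.
                for i in range(n):
                    if x[i] == 1:
                        w[i] /= factor
        return w

In the realizable (OPT = 0) case, this update makes only O(r log n)
mistakes when the target is an r-literal disjunction.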
In recent work, Peleg shows how to achieve a bound of
O(\sqrt{n} OPT) + \epsilon in polynomial time. In this work we improve
on Peleg's bound, giving a polynomial-time algorithm that achieves a
bound of O(n^{1/3 + \alpha} OPT) + \epsilon for any constant
\alpha > 0. The heart of
the algorithm is a method for weak-learning when OPT =
O(1/n^{1/3+\alpha}), which can then be fed into existing agnostic
boosting procedures to achieve the desired guarantee.
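As a rough illustration of this weak-to-strong pipeline, the sketch
below shows a generic boosting loop in the AdaBoost style: it
repeatedly reweights the sample, calls a weak learner, and combines
the returned hypotheses by a weighted vote. The agnostic boosting
procedures alluded to above (e.g., those of Kalai and Kanade, or of
Feldman) differ in important details, and weak_learn here is a
hypothetical placeholder, not the paper's actual weak learner.

    import math

    def boost(samples, weak_learn, rounds):
        """Generic weighted-vote boosting loop (AdaBoost-style sketch).

        samples:    list of (x, y) pairs with labels y in {-1, +1}.
        weak_learn: placeholder weak learner; takes (samples, weights)
                    and returns a hypothesis h mapping x to {-1, +1}.
        """
        m = len(samples)
        d = [1.0 / m] * m              # current distribution over samples
        ensemble = []                  # list of (coefficient, hypothesis)
        for _ in range(rounds):
            h = weak_learn(samples, d)
            err = sum(d[i] for i, (x, y) in enumerate(samples) if h(x) != y)
            err = min(max(err, 1e-12), 1.0 - 1e-12)   # avoid log(0)
            beta = 0.5 * math.log((1.0 - err) / err)  # vote weight
            ensemble.append((beta, h))
            # Reweight: emphasize examples the weak hypothesis got wrong.
            d = [d[i] * math.exp(-beta * y * h(x))
                 for i, (x, y) in enumerate(samples)]
            z = sum(d)
            d = [v / z for v in d]
        return lambda x: 1 if sum(b * h(x) for b, h in ensemble) >= 0 else -1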
This is joint work with Avrim Blum and Or Sheffet.