We introduce a simple new algorithm for improper, agnostic learning of halfspaces with normally distributed inputs, a problem closely related to learning sparse parities with noise. It uses exponentially less data than previous algorithms, particularly those based on fitting polynomials. The linchpin of the algorithm is a new hypothesis class called smooth lists of halfspaces. These classifiers are more flexible than halfspaces, yet do not require more data to train in the worst case. They have a variety of useful algebraic properties, and may be analyzed with the Laplace transform or in a reproducing kernel Hilbert space. They enable an iterative approach which is fundamentally different from update rules such as the perceptron and multiplicative weights. The algorithm is very practical and achieves good experimental performance on both natural and artificial problems. Joint work with Geoff Gordon.
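For context, the polynomial-fitting approach the abstract compares against can be sketched as follows: fit a low-degree polynomial to the +/-1 labels by least squares under Gaussian inputs, then threshold at zero. This is a minimal illustration of the prior baseline, not the speaker's algorithm; the data model, polynomial degree, and noise rate are assumptions chosen for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Normally distributed inputs, labels from a halfspace, with some labels flipped.
    n, d, noise = 2000, 5, 0.1
    w_true = rng.standard_normal(d)
    X = rng.standard_normal((n, d))
    y = np.sign(X @ w_true)
    flip = rng.random(n) < noise
    y[flip] = -y[flip]

    def poly_features(X, degree=2):
        # Monomials up to the given degree (here degree 2 for brevity).
        feats = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
        if degree >= 2:
            for i in range(X.shape[1]):
                for j in range(i, X.shape[1]):
                    feats.append(X[:, i] * X[:, j])
        return np.column_stack(feats)

    # Least-squares polynomial regression, then threshold the fitted polynomial.
    Phi = poly_features(X)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    pred = np.sign(Phi @ coef)
    print("training error:", np.mean(pred != y))

The number of monomials, and hence the number of samples this baseline needs, grows rapidly with the degree and the dimension; the data savings claimed in the abstract are relative to this kind of blowup.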
BIO: Shiva Kaul is a PhD candidate in the Computer Science Department at Carnegie Mellon University. He studies machine learning, especially classification algorithms, in collaboration with Geoffrey Gordon, members of the Machine Learning Department, and CERT. Previously, he completed an M.S. in Computer Science under the supervision of Mahadev Satyanarayanan, and held internships at Microsoft Research Redmond and Adobe Labs.