Large margin linear classification methods have been successfully
applied in many areas. For a linearly separable problem, it has been
shown that under appropriate assumptions, the expected misclassification
error of the computed ``optimal hyperplane'' approaches zero
at a rate proportional to the inverse of the training sample size.
This rate is usually characterized by the margin and the maximum
norm of the input data. In this paper, we argue that another quantity,
namely the robustness of the input data distribution, also plays an important
role in characterizing the convergence behavior of the expected misclassification
error. Based on this concept of robustness, we show that the expected
misclassification error can converge exponentially in the training sample size.
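As a point of reference (a minimal sketch; the symbols below are illustrative
and not the paper's own notation), the classical rate mentioned above is
typically of the form
\[ \mathbb{E}[\mathrm{err}_n] \;\lesssim\; \frac{\mathbb{E}\!\left[ R^2/\gamma^2 \right]}{n}, \]
where $n$ is the training sample size, $R$ bounds the norm of the input data,
and $\gamma$ is the margin, whereas an exponential rate takes the form
$\mathbb{E}[\mathrm{err}_n] \le C e^{-c n}$ for constants $C, c > 0$ that, in
the present argument, depend on the margin and on the robustness of the input
distribution.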