Tuesday, September 27, 2016. 12:00PM. NSH 3305.
Ellen Vitercik - Foundations of application-specific algorithm selection
In many scientific domains, the underlying computational problems are frequently NP-hard, yet a wealth of algorithms exists that efficiently find approximate solutions. These algorithms are often defined by a number of tunable parameters, which introduce an infinite spectrum of algorithmic options. Which algorithm and choice of parameters will lead to the best performance for the specific application at hand? One could compare the worst-case performance of the candidate algorithms and choose the one with the best worst-case guarantee, but worst-case instances may never occur in practice, so this approach offers little guidance for the application in question. This problem, often referred to as application-specific algorithm selection, has received significant attention from an empirical perspective over the past several decades, but it has not yet been placed on a firm theoretical foundation.

In this work, we develop learning algorithms for application-specific algorithm selection with robust guarantees. We focus on two large families of algorithms: the first is an infinite set of clustering algorithms based on hierarchical clustering followed by dynamic programming, and the second is a rich parameterized class of integer quadratic programming approximation algorithms based on SDP rounding. We work within a widely applicable framework in which the learning algorithm is given samples from an application-specific distribution over problem instances and learns an algorithm that performs well over that distribution (see the sketch below). We provide strong sample complexity guarantees and efficient algorithms that optimize the parameters to best suit typical inputs from a particular application.
This is joint work with Nina Balcan, Vaishnavh Nagarajan, and Colin White.
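To make the sample-based framework concrete, here is a minimal sketch in Python: draw problem instances from an application-specific distribution, evaluate each candidate parameter's average performance on the sample, and return the empirically best one. The function names, the toy performance model, and the grid search are illustrative assumptions only, not the algorithms from the talk, which come with sample complexity guarantees and more efficient parameter optimization.

```python
import random

# Hypothetical stand-ins: run_algorithm(alpha, instance) runs one member of a
# parameterized algorithm family on a problem instance and returns a
# performance score (higher is better). These names are not from the talk.

def run_algorithm(alpha, instance):
    # Toy performance model: pretend each instance has a hidden "best" parameter
    # and performance degrades with distance from it.
    return -abs(alpha - instance["best_alpha"])

def sample_instances(n, seed=0):
    # Stand-in for drawing problem instances from an application-specific distribution.
    rng = random.Random(seed)
    return [{"best_alpha": rng.gauss(0.6, 0.1)} for _ in range(n)]

def tune_parameter(instances, candidate_alphas):
    """Empirical risk minimization over a finite grid of parameter values:
    pick the parameter with the best average performance on the sample."""
    def avg_perf(alpha):
        return sum(run_algorithm(alpha, inst) for inst in instances) / len(instances)
    return max(candidate_alphas, key=avg_perf)

if __name__ == "__main__":
    sample = sample_instances(n=200)
    grid = [i / 20 for i in range(21)]  # candidate parameter values in [0, 1]
    best = tune_parameter(sample, grid)
    print(f"parameter chosen for this application's distribution: {best:.2f}")
```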