If a seller wishes to sell an item to a single buyer, she faces the problem of maximizing the revenue she earns from this transaction. With perfect information about the buyer, she could sell the item at exactly the buyer's value, earning the maximum revenue any seller could ever hope to make from selling to this buyer. However, the seller of an item almost never has such perfect information about each potential buyer, and with no information about the buyer's value, the seller cannot hope to achieve any approximation to this pointwise optimal revenue. For this reason, the classical auction theory literature (starting with Myerson in 1981) has assumed that the seller knows some distribution D_i from which each buyer i draws his private value v_i, and searches for an auction which maximizes the revenue achieved in expectation over the draws from these distributions.
If we wish to use such a Bayesian approach, we should be sure that we have enough data to justify our optimization. In general, the answer to "How much data do we need about buyers in order to sell to them optimally?" depends heavily upon which set of selling procedures we are deciding between, just as the sample complexity of a learning problem depends heavily upon the class one is learning over. We leverage tools from learning theory both to understand the sample complexity of several known auction formats and to develop a new auction format which sits in the "sweet spot" between being expressive enough to approximately optimize revenue and simple enough that the sample complexity of revenue maximization is polynomial.
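To make the "learning from samples" viewpoint concrete, here is a minimal sketch of the simplest instance of this problem: choosing a posted price for a single buyer from i.i.d. samples of his value distribution, by maximizing empirical revenue. This toy example and the helper name `empirical_best_price` are illustrative assumptions, not the auction format developed in this work.

```python
import random

def empirical_best_price(samples):
    """Return the posted price maximizing empirical revenue,
    price * (fraction of sampled values >= price).
    It suffices to consider prices equal to one of the samples."""
    n = len(samples)
    vals = sorted(samples)
    best_price, best_rev = 0.0, 0.0
    for i, p in enumerate(vals):
        rev = p * (n - i) / n  # exactly n - i sampled values are >= vals[i]
        if rev > best_rev:
            best_price, best_rev = p, rev
    return best_price

# With values uniform on [0, 1], expected revenue at price p is p * (1 - p),
# so the true optimal posted price is 1/2; with enough samples the
# empirically optimal price concentrates around it.
random.seed(0)
samples = [random.random() for _ in range(100_000)]
print(empirical_best_price(samples))
```

The sample complexity question is then: how many samples are needed before the empirically optimal choice is near-optimal against the true distribution? For a single posted price this is easy, but richer auction classes (e.g. with multiple buyers and per-buyer reserves) require uniform convergence over the whole class, which is where learning-theoretic tools enter.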
Joint work with Tim Roughgarden.