Suppose w is an n-letter random word in which each letter w_i is drawn independently from {1, ..., d} according to a probability distribution p = (p_1, ..., p_d). How large do you expect w's longest increasing subsequence (LIS) to be? This is one of the most famous problems in combinatorics, with a long history stretching back to a celebrated 1935 paper of Erdős and Szekeres, and it has since touched on areas such as random matrix theory, queueing theory, and representation theory. Surprisingly, many basic questions about LISes (and their generalizations given by the RSK algorithm of Robinson, Schensted, and Knuth) remain open.
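To make the setup concrete, here is a minimal sketch of the random-word model and an LIS computation via patience sorting. The choices of n, d, and p below are illustrative, and I assume the usual convention for words over a finite alphabet, where "increasing" means weakly increasing (for strictly increasing, use bisect_left instead).

```python
import bisect
import random

def lis_length(word):
    """Length of the longest weakly increasing subsequence (patience sorting).

    tails[k] holds the smallest possible last letter of a weakly
    increasing subsequence of length k + 1; each letter either extends
    the longest subsequence found so far or improves one of the tails.
    """
    tails = []
    for x in word:
        i = bisect.bisect_right(tails, x)  # bisect_left for strict increase
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

# An n-letter random word with i.i.d. letters from {1, ..., d},
# drawn according to a distribution p (values chosen for illustration).
n, d = 1000, 3
p = [0.5, 0.3, 0.2]
rng = random.Random(0)
word = rng.choices(range(1, d + 1), weights=p, k=n)
print(lis_length(word))
```

Patience sorting runs in O(n log d) time here, since tails never exceeds the alphabet size d.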
In this paper, we give a tight bound on the expected value of a random word's LIS, as well as tight bounds on the shape of a random word's "RSK diagram". We apply these new results to the problem of "quantum state learning": given a small number of "samples" of a quantum state, the goal is to output a good estimate of that state. This problem is not only of theoretical interest but also arises in the current-day implementation and verification of quantum technologies. Our main result is the first sample-optimal algorithm for quantum state learning.
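For readers unfamiliar with the RSK diagram mentioned above, the shape in question can be computed by Schensted row insertion; this is a standard textbook construction, not the paper's algorithm. A minimal sketch (semistandard convention for words: each inserted letter bumps the leftmost strictly greater entry in its row), whose first row length recovers the LIS:

```python
import bisect

def rsk_shape(word):
    """Shape of the RSK insertion tableau of a word, as a list of row lengths.

    Schensted row insertion: each letter is placed at the end of the first
    row if possible; otherwise it bumps the leftmost strictly greater entry,
    which is then inserted into the next row, and so on.
    """
    rows = []
    for x in word:
        for row in rows:
            i = bisect.bisect_right(row, x)
            if i == len(row):
                row.append(x)
                break
            row[i], x = x, row[i]  # bump the displaced entry down a row
        else:
            rows.append([x])  # x fell past every existing row
    return [len(row) for row in rows]

print(rsk_shape([2, 1, 2, 3, 1]))  # → [3, 2]
```

The first row's length equals the longest weakly increasing subsequence length, which is the connection between the LIS bound and the RSK-shape bounds in the abstract.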
This is joint work with Ryan O'Donnell.
BIO: TBA