15-451 Algorithms                                               01/21/09
RECITATION NOTES

Problem 1:

Here is a problem related to lecture: what is the expected length of the
leftmost branch in the quicksort recursion tree? In other words, you start
with a set S of n distinct numbers and then repeat the following until they
are all gone:

  1. pick a random element p in S.
  2. throw out p and all numbers > p from S.

What is the expected number of times we repeat this?

A first-cut analysis suggests log_2(n), because each pivot p throws out half
of the remaining numbers on average. But technically this is not a legal
argument. WHY NOT? Because the expectation of a function is not in general
the function of the expectation: the fact that the *expected* size halves
each round does not mean the expected number of rounds is log_2 of the
starting size. And in fact, the answer is really H_n = 1 + 1/2 + ... + 1/n,
or about ln(n).

Here's a way to prove the correct bound. We are interested in the expected
number of pivots ever chosen in step 1. So, let X_i be an indicator RV for
the event that the ith smallest element is ever chosen. E.g., the smallest
element will for sure be chosen eventually, so E[X_1] = 1. How about the
2nd smallest? Can anyone give an argument why E[X_2] = 1/2? More generally,
why is E[X_i] = 1/i?

Answer: first note that element i gets chosen as a pivot exactly when, among
{1,...,i}, it is the first one picked (any smaller pivot would throw i out).
So one way to think about this is like the dart game from lecture: the game
ends when the dart hits some number in {1,...,i}, and the chance that this
number is i is 1/i. Another way is to imagine we implement step 1 by
randomly permuting the elements and then choosing them in that order as
pivots, passing over any element that is > some pivot chosen so far. Then
the problem boils down to asking: "in a random permutation of 1...n, what is
the chance that element i comes before all of {1,...,i-1}?" Well, since it's
a random permutation, each element of {1,...,i} is just as likely as any
other to come first, so the answer is 1/i. By linearity of expectation, the
expected number of pivots chosen is E[X_1] + ... + E[X_n] = 1 + 1/2 + ... +
1/n = H_n.
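To make this concrete, here is a minimal simulation sketch in Python (our
own illustration, not part of the original notes; the function names are
made up). It runs the throw-out process many times and compares the observed
average number of rounds with H_n and with the first-cut guess log_2(n).

    import math
    import random

    def leftmost_branch_length(n):
        """One run of the process: repeatedly pick a random pivot p and
        throw out p together with every element larger than p.
        Returns how many rounds it takes to empty the set."""
        s = list(range(1, n + 1))
        rounds = 0
        while s:
            p = random.choice(s)
            s = [x for x in s if x < p]  # discard p and all numbers > p
            rounds += 1
        return rounds

    def harmonic(n):
        return sum(1.0 / i for i in range(1, n + 1))

    if __name__ == "__main__":
        n, trials = 1000, 2000
        avg = sum(leftmost_branch_length(n) for _ in range(trials)) / trials
        print(f"observed average rounds: {avg:.3f}")
        print(f"H_n                    : {harmonic(n):.3f}")   # about ln(n) + 0.577
        print(f"log_2(n)               : {math.log2(n):.3f}")  # the first-cut guess

For n = 1000 the observed average should come out near H_1000, about 7.49,
noticeably below log_2(1000), about 9.97: the first-cut guess is not just an
illegal argument, it gives the wrong constant.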
-----------------------------------------------------------------------------

Problem 2: BACKWARDS-ANALYSIS OF QUICKSORT.

Here's a kind of bizarre way of analyzing quicksort: look at the algorithm
backwards. (This is in the lecture notes.)

Actually, to do this analysis, it is better to think of a version of
Quicksort that, instead of being recursive, at each step picks a random
element out of all the non-pivots so far, and then breaks up whatever bucket
that element happens to be in. This is easiest to think of if you imagine
doing the sorting "in place" in the array. If you think about it, this
version just rearranges the order in which the work happens; it doesn't
change the number of comparisons done. (You probably wouldn't want to
implement the algorithm this way, though, because you'd have to keep track
of which elements are previous pivots, etc.) Maybe do an example so that
people are comfortable with this version of the algorithm and see that it
really is doing the same work in a different order; the code sketch at the
end of these notes can serve as such an example.

The reason this version is nice is that if you watch the pivots get chosen
and where they would sit in a sorted array, they arrive in a completely
random permutation. Looking at the algorithm run backwards, at a generic
point in time we have $k$ pivots (producing $k+1$ buckets) and we ``undo''
one of our pivot choices at random, merging the two adjoining buckets.
(Remember, the pivots come in a completely random order, so, viewed
backwards, the most recent pivot is equally likely to be any of the $k$.)
The cost of an undo operation is the sum of the sizes of the two buckets
joined, since this was the number of comparisons needed to split them.

Notice that for each undo operation, if you sum the costs over all $k$
possible pivot choices, you count each bucket twice (or just once if it is
the leftmost or rightmost bucket), for a total of $< 2n$. Since we are
picking one of these $k$ possibilities at random, the {\em expected} cost of
this undo operation is at most $2n/k$. So, adding up over all undo
operations, we get $\sum_{k=1}^{n} 2n/k = 2n H_n$. This is pretty slick, and
not the sort of thing you'd think of trying off the top of your head, but it
turns out to be useful in analyzing related algorithms in computational
geometry.
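For concreteness, here is one way to implement the bucket-splitting version
in Python (again our own sketch with made-up names, not from the notes). It
maintains the buckets explicitly and counts comparisons, so you can check
that the average stays below the $2 n H_n$ bound from the backwards
analysis.

    import random

    def bucket_quicksort_comparisons(n):
        """Non-recursive view of randomized quicksort: repeatedly pick a
        uniformly random non-pivot and split the bucket it lives in.
        Returns the total number of comparisons performed."""
        buckets = [list(range(n))]        # everything starts in one bucket
        where = {x: 0 for x in range(n)}  # bucket index of each non-pivot
        non_pivots = list(range(n))
        comparisons = 0
        while non_pivots:
            x = random.choice(non_pivots)
            non_pivots.remove(x)          # x becomes a pivot
            b = buckets[where[x]]
            comparisons += len(b) - 1     # compare every other element of b to x
            left = [y for y in b if y < x]
            right = [y for y in b if y > x]
            buckets[where[x]] = left      # elements of left keep their index
            buckets.append(right)
            for y in right:
                where[y] = len(buckets) - 1
            del where[x]
        return comparisons

    def harmonic(n):
        return sum(1.0 / i for i in range(1, n + 1))

    if __name__ == "__main__":
        n, trials = 200, 300
        avg = sum(bucket_quicksort_comparisons(n) for _ in range(trials)) / trials
        print(f"average comparisons: {avg:.1f}")
        print(f"2 n H_n bound      : {2 * n * harmonic(n):.1f}")

The observed average should land somewhat below $2 n H_n$, which is
consistent with the analysis: the expected cost of each undo was only
upper-bounded by $2n/k$, not computed exactly.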