Num | Date | Summary |
---|---|---|
19 | 12.Nov | We started our discussion of Computational Geometry today. We spent about half of the lecture on a configuration space algorithm for planning shortest 2D translational motions of a polygonal robot moving among polygonal obstacles. (Our core step involved pairs of convex polygons. The algorithm generalizes by thinking of an arbitrary polygon as the union of convex polygons.) Several computational geometry and computer science tools are required to implement that algorithm. We began our discussion of those today: We discussed edge-edge intersections. We mentioned degeneracies. (In order to reduce worry about degeneracies, one sometimes assumes that the input to a geometric algorithm is in "general position", meaning a generic situation that will occur with probability 1, relative to some geometric probability. In practice, one may wish to randomize input, perturbing coordinates by tiny amounts.) The class came up with two algorithms for deciding whether a point q lies within a (possibly nonconvex) polygon. One algorithm entails summing the (signed) changes in angle of the vector from the point q to the vertices of the polygon, as one moves around the polygon. This sum will be 0 if q lies outside the polygon and ±2π (with sign determined by the direction of traversal) if q lies within the polygon. The other algorithm entails counting the number of intersections that a semi-infinite ray anchored at q makes with the edges of the polygon. This number will be even if q lies outside the polygon and odd if q lies within the polygon. (Again, there are degeneracies which one needs to consider or dismiss by a general position argument.) Next, we considered the point-in-polygon problem for the specific case of a convex polygon. For the multiple-query setting (in which the convex polygon remains fixed while the query points vary), we observed that it is useful to first preprocess the polygon in order to represent it as a collection of triangular wedges, meeting at an interior point. 
The natural sort of the wedges by angle around this interior point means that one can use binary search to find the key supporting line of the convex polygon against which to test a given point q (see "Additional Details" below). One may therefore perform each point-in-polygon test in O(log(n)) time rather than O(n) time. (Here n is the number of vertices or edges in the polygon.) Additional Details: Fix some point interior to a convex polygon (perhaps the centroid of the polygon's vertices). Then each edge of the polygon defines a triangle when combined with this interior point. If we extend these triangular wedges out to infinity, then generally exactly one wedge will include a given query point q (in some degenerate cases, two touching wedges may contain q, but that presents no difficulties). The "key supporting line" is the line containing the edge of the polygon within that wedge. If q is on one side of the key supporting line, then one may conclude that q lies within the polygon; if q is on the other side of the line, then q lies outside the polygon; and if q is right on the line, then it lies on the boundary of the polygon. Performing such a point-versus-line test entails plugging point q's (x,y) coordinates into the expression ax+by+c, with ax+by+c=0 being the relevant line equation. The sign of the value ax+by+c decides the point's location. For instance, if one chooses the vector (a,b) to point away from the polygon's interior, then a positive value means that point (x,y) lies outside the polygon. Motivated by binary search, we discussed an efficient multiple-query algorithm for determining the location of points within regions of a given planar subdivision. We preprocessed the subdivision into horizontal "slabs", chosen so as to contain no vertices in their interiors. We could then use binary search on the y-coordinate of a query point q to determine the slab containing q. 
Given the slab, we could then use another binary search to find the region containing q within the slab, and thus the overall region in the plane. This second binary search again performed comparisons by plugging q's coordinates into line equations (namely some of the lines within the relevant slab). We mentioned line-sweep as a tool for generating the slab decomposition described above. We mentioned the Euler characteristic for planar subdivisions. |
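The second point-in-polygon algorithm above (counting ray crossings) can be sketched in a few lines of Python. This is a minimal illustration, not lecture code: it assumes general position (q lies on no edge, and the rightward horizontal ray from q passes through no vertex), and the function name is ours.

```python
def point_in_polygon(q, polygon):
    """Ray-crossing test: True if q lies inside the (possibly nonconvex)
    polygon, given as a list of (x, y) vertices in order around the boundary.
    Assumes general position: q is on no edge, and the rightward horizontal
    ray from q passes through no vertex."""
    qx, qy = q
    crossings = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > qy) != (y2 > qy):  # edge straddles the horizontal line y = qy
            # x-coordinate at which the edge meets that horizontal line
            x_cross = x1 + (qy - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > qx:        # crossing lies on the rightward ray
                crossings += 1
    return crossings % 2 == 1       # odd number of crossings => inside
```

Attaching a sign to each crossing in the same loop would essentially recover the winding-number computation of the first algorithm.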
20 | 14.Nov | Today, we first reviewed some basic facts about convex sets in arbitrary finite dimensions. We also introduced the ideas of an extreme point and a supporting hyperplane. Then we discussed numerous algorithms for computing convex hull in two dimensions. Specifically: We developed an (inefficient) algorithm for computing the convex hull of a finite set of points in two-dimensional Euclidean space, based on the idea of finding extreme points. Given a finite set of points S, the algorithm first determines the extreme points of the convex hull of S via triangle tests, then creates S's convex hull by sorting these points about their centroid. The algorithm has time complexity O(n⁴), with n the number of points in S. We then devised an efficient algorithm by essentially reversing the order of these steps. The algorithm sorts all the points of S about some interior point, then uses local convexity tests to retain only the extreme points. This efficient algorithm is called Graham's Scan. The algorithm has time complexity O(n log(n)), which is asymptotically optimal. Subsequently, we discussed Jarvis's March, which is an algorithm that constructs the convex hull of a finite set of 2D points by finding the boundary edges of the hull, much like wrapping a string around the set of points. Again, we first discussed a very inefficient process for finding such edges, then a more efficient version. We observed that computing the convex hull of finitely many 2D points is equivalent to sorting finitely many numbers, if one is permitted linear time work to convert data for one problem into data for the other problem. The optimal complexity of comparison-based sorting is O(n log(n)), so this equivalence tells us that computing the convex hull of finitely many 2D points has an optimal complexity of O(n log(n)), with n being the number of points. We briefly mentioned Mergehull, inspired by a similar algorithm for sorting numbers. 
The key step in Mergehull is a linear time algorithm for computing the convex hull of the union of two convex polygons. We also briefly mentioned Dynamic Hull, an algorithm that can efficiently create a convex hull from an existing convex polygon and a new point. The algorithm can perform such an update in O(log(n)) time. Very quickly, near the end of lecture, we introduced Voronoi diagrams with a simple example. |
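Graham's Scan as described above can be sketched as follows. This is an illustrative version, not lecture code: it pivots on the lowest point rather than an interior point (a common variant of the sort-then-scan idea), assumes at least three distinct points in general position, and all helper names are ours.

```python
import math

def cross(o, a, b):
    """Z-component of (a - o) x (b - o); positive means a left turn o->a->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    """Extreme points of distinct 2D points, counterclockwise, O(n log(n))."""
    # Pivot on the lowest point (ties broken by x); sorting about it by angle
    # plays the role of the sort about an interior point from lecture.
    pivot = min(points, key=lambda p: (p[1], p[0]))
    def angle_dist(p):
        return (math.atan2(p[1] - pivot[1], p[0] - pivot[0]),
                (p[0] - pivot[0]) ** 2 + (p[1] - pivot[1]) ** 2)
    rest = sorted((p for p in points if p != pivot), key=angle_dist)
    hull = [pivot]
    for p in rest:
        # Local convexity test: pop any point that would make a non-left
        # turn, since such a point cannot be an extreme point of the hull.
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull
```

The sort dominates the running time; the scan itself is linear, since each point is pushed and popped at most once.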
21 | 19.Nov | We began our discussion of Markov chains today. We defined closed and irreducible subsets of states. A closed subset is a subset of states such that no state outside the subset is reachable from within the subset via the given Markov chain transitions. An irreducible subset is a closed subset that contains no proper nonempty closed subsets. These definitions therefore allow us to define corresponding closed and irreducible subchains. In an irreducible subchain every state is reachable from every other state (not necessarily with probability 1 if the chain is infinite). We defined the semantics of the stochastic matrix P associated with a Markov chain. We observed that P always has eigenvalue 1. (In fact, the column vector consisting of 1s is a right eigenvector for eigenvalue 1.) For a finite chain, the multiplicity of eigenvalue 1 is equal to the number of recurrent classes in the Markov chain. (A recurrent class is an irreducible subchain in which every state is eventually reachable with probability 1 from every other state.) We classified states as periodic or aperiodic and as transient or persistent. For persistent states we further classified states into those with finite mean recurrence times and those with infinite mean recurrence times. (A persistent state with infinite mean recurrence time is called a null state. An aperiodic persistent state with finite mean recurrence time is called ergodic.) We stated that all states in an irreducible chain have the same type. We observed that in a finite chain not all states can be transient and that no persistent state can be a null state. An irreducible Markov chain whose states are ergodic has a stationary distribution, which we wrote as a row vector π. This vector is a left eigenvector of P, corresponding to eigenvalue 1: π = πP. The mean recurrence time μi of any state i in such a Markov chain is the inverse of its steady-state probability πi. In other words, μi = 1/πi. 
For a finite but reducible Markov chain, the eigenvalue 1 may appear with multiplicity greater than one, yielding several linearly independent left eigenvectors, one for each recurrent class. The left eigenvector associated with a recurrent class describes the stationary distribution for that recurrent class. The stochastic matrix P of such a Markov chain also has a right eigenvector (column vector) for each occurrence of the eigenvalue 1, i.e., for each recurrent class. The eigenvector may be chosen so that its ith component is the probability of eventually entering the given recurrent class from state i. Finally, very quickly, we considered a finite chain with two periodic recurrent classes. We observed that the eigenvalues of the stochastic matrix include roots of unity, measuring the periodicities of the recurrent classes. |
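The relations π = πP and μi = 1/πi can be checked numerically on a small irreducible, aperiodic chain. The transition matrix below is our own toy example (not from lecture), and power iteration is just one simple way to approximate π:

```python
def stationary(P, iters=1000):
    """Approximate the stationary row vector pi satisfying pi = pi P
    by repeated right-multiplication (power iteration)."""
    n = len(P)
    pi = [1.0 / n] * n  # any initial distribution converges for this chain
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# A toy irreducible, aperiodic 3-state chain (our own example).
P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]

pi = stationary(P)          # approximately [0.25, 0.5, 0.25]
mu = [1.0 / p for p in pi]  # mean recurrence times mu_i = 1/pi_i: [4, 2, 4]
```

One can verify by hand that π = (1/4, 1/2, 1/4) satisfies π = πP exactly for this matrix, so the middle state recurs on average every 2 steps and the outer states every 4.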