Computational Photography, project 4

Severin Hacker, ETH/CMU

Problem description

Implement feature matching for automatic image stitching as described in the MOPS (Multi-Scale Oriented Patches) paper by Brown, Szeliski, and Winder.

Algorithm

  1. Detect corner features in an image: use the Harris detector and Adaptive Non-Maximal Suppression, keeping the 500 best corners
  2. Extract a feature descriptor for each feature point
  3. Match these feature descriptors between the two images: use Lowe's ratio test, keeping only the best matches
  4. Use a robust method (RANSAC) to compute a homography: 100 RANSAC cycles
  5. Proceed as in Project 3 to produce a mosaic: I use simple feathering
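The robust estimation in step 4 can be sketched as follows. This is a minimal version of RANSAC over a DLT homography fit; the writeup only fixes the 100 cycles, so the 2-pixel inlier threshold below is an assumed (but typical) value:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: homography from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # null vector of A (last row of Vt) gives the homography entries
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, n_iters=100, thresh=2.0, seed=0):
    """Sample 4 random matches, fit H, count inliers; keep the best model
    and refit it on all of its inliers."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), 4, replace=False)
        try:
            H = homography_dlt(src[idx], dst[idx])
        except np.linalg.LinAlgError:
            continue
        # project src through H and measure the reprojection error
        pts = np.c_[src, np.ones(len(src))] @ H.T
        proj = pts[:, :2] / pts[:, 2:3]
        err = np.linalg.norm(proj - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return homography_dlt(src[best_inliers], dst[best_inliers]), best_inliers
```

For production use one would also normalize the point coordinates before the DLT for better conditioning; the sketch skips that.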

Results

Detecting corner features

Our Harris corners:
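A minimal sketch of the Harris response used above (k = 0.04 is a common default, not necessarily the exact value used here; the paper smooths the structure tensor with a Gaussian, which a box filter stands in for to keep the sketch dependency-free):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner strength map for a grayscale image."""
    # image gradients (np.gradient returns d/drow, then d/dcol)
    Iy, Ix = np.gradient(img.astype(float))

    def smooth(a, r=2):
        # box-filter average over a (2r+1) x (2r+1) window
        out = np.zeros_like(a)
        pad = np.pad(a, r, mode='edge')
        for dy in range(2 * r + 1):
            for dx in range(2 * r + 1):
                out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (2 * r + 1) ** 2

    # smoothed structure-tensor entries
    Sxx, Syy, Sxy = smooth(Ix * Ix), smooth(Iy * Iy), smooth(Ix * Iy)
    # Harris measure: det(M) - k * trace(M)^2
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2
```

Corners are then the local maxima of this response map.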

After ANMS:
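ANMS keeps corners that are both strong and well spread out over the image. A sketch of the straightforward O(n²) version (c_robust = 0.9 is the robustness value from the paper):

```python
import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive Non-Maximal Suppression: for each corner, compute the
    distance to the nearest corner that is significantly stronger, then
    keep the n_keep corners with the largest such suppression radii."""
    coords = np.asarray(coords, dtype=float)
    strengths = np.asarray(strengths, dtype=float)
    radii = np.full(len(coords), np.inf)
    for i in range(len(coords)):
        # corner j suppresses corner i if strengths[i] < c_robust * strengths[j]
        dominating = c_robust * strengths > strengths[i]
        if dominating.any():
            d = np.linalg.norm(coords[dominating] - coords[i], axis=1)
            radii[i] = d.min()
    # indices of the n_keep corners with the largest radii
    return np.argsort(-radii)[:n_keep]
```

The globally strongest corner is suppressed by nobody, gets an infinite radius, and is always kept first.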

Feature Descriptor

Here is a sample patch for the pixel at one end of the roof:
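A sketch of how such a patch can be extracted: an 8x8 descriptor sampled with a spacing of 5 pixels, giving the 40x40 support window from the paper, followed by bias/gain normalization. The paper samples from a blurred, coarser pyramid level; that pre-blur is omitted here:

```python
import numpy as np

def mops_descriptor(img, y, x, spacing=5):
    """8x8 MOPS-style descriptor around pixel (y, x): sparse sampling of a
    40x40 window, then normalization to zero mean and unit variance."""
    # sample offsets centered on the keypoint: -17.5, -12.5, ..., 17.5
    offs = (np.arange(8) - 3.5) * spacing
    ys = np.clip(np.round(y + offs).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(x + offs).astype(int), 0, img.shape[1] - 1)
    patch = img[np.ix_(ys, xs)].astype(float)
    # bias/gain normalization makes the descriptor invariant to
    # affine intensity changes between the two images
    return (patch - patch.mean()) / (patch.std() + 1e-8)
```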

Best matches with Lowe
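Lowe's criterion accepts a match only when the nearest neighbour is much closer than the second-nearest, which rejects ambiguous features. A sketch (the 0.6 ratio threshold is an assumed value, not necessarily the one used here):

```python
import numpy as np

def lowe_matches(desc1, desc2, ratio=0.6):
    """Match descriptors with the 1-NN / 2-NN distance-ratio test."""
    matches = []
    for i, d in enumerate(desc1):
        # squared distances from descriptor i to every descriptor in image 2
        dists = np.sum((desc2 - d) ** 2, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        # accept only if the best match clearly beats the runner-up
        if dists[j1] < ratio ** 2 * dists[j2]:
            matches.append((i, j1))
    return matches
```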

Final Mosaic
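The blend uses simple feathering; a minimal sketch as a linear cross-fade over the overlap columns, assuming both images are already warped into the same frame with img1 occupying the left side:

```python
import numpy as np

def feather_blend(img1, img2, x0, x1):
    """Cross-fade two aligned, same-size images over columns [x0, x1):
    img1's weight ramps linearly from 1 down to 0 across the overlap."""
    h, w = img1.shape[:2]
    alpha = np.zeros(w)
    alpha[:x0] = 1.0                               # pure img1 on the left
    alpha[x0:x1] = np.linspace(1.0, 0.0, x1 - x0)  # linear ramp in the overlap
    a = alpha.reshape(1, -1)
    if img1.ndim == 3:                             # broadcast over color channels
        a = a[..., None]
    return a * img1 + (1.0 - a) * img2
```

Distance-transform-based feathering handles irregular overlap shapes better, but this column ramp is enough for a mostly horizontal panorama.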

Bells and Whistles

Rotation Invariance

The same patch as before, but now using the rotation-invariant feature descriptor:
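For reference, a sketch of how rotation invariance is obtained: estimate a dominant orientation from the gradient averaged around the keypoint (standing in for the Gaussian-smoothed gradient in the paper; the window radius is an assumed value), then sample the descriptor grid along that orientation instead of the image axes:

```python
import numpy as np

def dominant_orientation(img, y, x, radius=4):
    """Dominant orientation at (y, x): angle of the gradient averaged
    over a small window around the keypoint."""
    Iy, Ix = np.gradient(img.astype(float))
    win = (slice(y - radius, y + radius + 1), slice(x - radius, x + radius + 1))
    return np.arctan2(Iy[win].mean(), Ix[win].mean())

def rotated_grid(y, x, theta, spacing=5):
    """8x8 descriptor sample coordinates, rotated by theta about (y, x)."""
    offs = (np.arange(8) - 3.5) * spacing
    dx, dy = np.meshgrid(offs, offs)
    c, s = np.cos(theta), np.sin(theta)
    return y + s * dx + c * dy, x + c * dx - s * dy
```

Sampling the image at the rotated coordinates (with interpolation) yields a descriptor that is the same regardless of in-plane camera rotation.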

The mosaic produced with the rotation-invariant feature descriptor:

Discussion

As the results show, the mosaic produced with the rotation-invariant feature descriptor is nearly identical to the one produced with the plain descriptor. Why is that (assuming my implementation is bug-free ;-))? Perhaps my test image is simply a poor showcase for the rotation-invariant (RI) descriptor, since there is little in-plane rotation between the two shots. Perhaps the RI descriptor does change many of the raw matches, but the robustness of the RANSAC step hides this by converging on essentially the same inlier set as in the non-RI case. Or perhaps rotation invariance is simply unnecessary for producing good mosaics from typical hand-held image pairs.