Image Mosaics

Aaron Johnson, Computational Photography Fall 2007

For this project I started by warping an old picture of mine to view it from different angles. Here is the picture:

And now here it is with the poster plane parallel with the image plane:
And again with the desk plane parallel with the image plane:
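Warps like these are just 3x3 homographies applied to the image plane. As a minimal sketch (not the code I actually used), here is how points map through a homography, using a hypothetical translation-only matrix as the example:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography (with the homogeneous divide)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide out the w coordinate

# hypothetical example: a pure translation by (5, 3)
H_shift = np.array([[1.0, 0, 5], [0, 1, 3], [0, 0, 1]])
pts = np.array([[10.0, 20.0], [100.0, 50.0]])
moved = apply_homography(H_shift, pts)
```

For a real rectification the matrix comes from point correspondences rather than being written down by hand, but the warping step is the same.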


Now onto the interesting stuff. Here are 2 pictures from my hotel in San Diego:


And here they are stitched together, using correspondence points I selected myself:
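Given hand-selected correspondences like these, the homography is typically estimated with the direct linear transform (DLT): each point pair gives two linear constraints on the nine entries of H, and the smallest singular vector of the constraint matrix is the solution. A sketch, assuming at least four point pairs:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: 3x3 homography from >= 4 point pairs (Nx2 arrays)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two rows of the constraint matrix
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)       # right singular vector of smallest singular value
    return H / H[2, 2]             # normalize so H[2,2] = 1

# sanity check with a made-up pure translation
src = np.array([[0.0, 0], [1, 0], [0, 1], [1, 1]])
H = fit_homography(src, src + [5.0, 3.0])
```

With exact correspondences four pairs pin down H exactly; with noisy hand-clicked points you would feed in more pairs and let the SVD do the least-squares averaging.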


Pretty cool, how about a fountain?


And together:


Next I thought I'd try taking images of a plane from different positions, where the plane is the ground and the positions are where I was as I flew over it...


And together:


Next I moved on to autostitching. This is a multi-step process, including: detecting interest points in each image, keeping the best-spread subset of them, extracting a local texture descriptor around each point, matching descriptors between the images, and finding the largest consistent set of matches to fit the homography.
And here are the results. Starting with the same harbor scene from before, here are the scene points:

And now the best of those points:
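Picking the "best" points is usually done with adaptive non-maximal suppression: keep the points whose corner strength dominates the largest surrounding neighborhood, so the survivors spread across the image instead of clumping. A rough sketch (the published method compares against a fraction, e.g. 0.9, of the neighbor's strength; this simplified version uses a strict comparison):

```python
import numpy as np

def anms(xy, strength, keep=250):
    """Adaptive non-maximal suppression. For each corner, find the squared
    distance to the nearest strictly stronger corner (its suppression radius),
    then keep the corners with the largest radii."""
    n = len(xy)
    radii = np.full(n, np.inf)             # the strongest corner keeps radius inf
    for i in range(n):
        stronger = strength > strength[i]
        if stronger.any():
            d2 = np.sum((xy[stronger] - xy[i]) ** 2, axis=1)
            radii[i] = d2.min()
    order = np.argsort(-radii)             # largest suppression radius first
    return order[:keep]

# tiny made-up example: three collinear corners of decreasing strength
xy = np.array([[0.0, 0], [10, 0], [1, 0]])
s = np.array([3.0, 2.0, 1.0])
best = anms(xy, s, keep=2)
```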

The local texture from one point:
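The local texture is commonly captured as a small normalized patch descriptor: sample a window around the point (spaced out so it covers a larger area), then normalize away bias and gain so matching is robust to exposure differences between shots. A sketch, assuming a grayscale image array and in-bounds coordinates:

```python
import numpy as np

def patch_descriptor(img, x, y, size=8, spacing=2):
    """Sample a size x size patch (pixels `spacing` apart) centered on (x, y),
    then normalize it to zero mean and unit variance."""
    half = size * spacing // 2
    patch = img[y - half:y + half:spacing, x - half:x + half:spacing].astype(float)
    patch = patch - patch.mean()               # remove bias (exposure offset)
    return (patch / (patch.std() + 1e-8)).ravel()  # remove gain (contrast)

# hypothetical usage on a synthetic gradient image
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
d = patch_descriptor(img, 20, 20)
```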

And the correspondence points: red are all points that match, green are the most consistent set:
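That consistent set comes from RANSAC: repeatedly fit a motion model to a random handful of matches, count how many other matches agree, and keep the largest agreeing set. A sketch of the idea, simplified to a translation-only model to keep it short (the real pipeline fits a full homography from four random matches at each iteration):

```python
import numpy as np

def ransac_translation(src, dst, iters=500, tol=2.0, seed=0):
    """RANSAC with a translation model: hypothesize an offset from one random
    match, count matches that agree within tol pixels, keep the best set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        offset = dst[i] - src[i]                      # hypothesis from one match
        err = np.linalg.norm(src + offset - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# made-up matches: 15 true correspondences plus 5 bogus ones
src = np.array([[float(i), 0.0] for i in range(20)])
dst = src + np.array([5.0, 3.0])
dst[15:] += 100.0
good = ransac_translation(src, dst)
```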

Here are those same points on the other image:

And finally, the complete mosaic:

I took a whole bunch more pictures from that hotel and paired them off as further tests of the code; here they are:








These were good images with plenty of interesting points in them, which meant the software could align each pair fairly easily. The fountain from before failed because most of the interesting points are not in the overlap region. The plane images did work, though:


These were pretty good, but not perfect. Problems I noticed were: And after some experimenting, here are some of the causes: So I decided to calibrate for my lens, since that should make everything match up better. I took some pictures of a checkerboard pattern:


Then ran the software from here, and got these results:
Hand selected corner points:

Compensated corner points:

Overall error:

Projection error:

Scatterplot of error:

Calculated extrinsic parameters:

And rectified image:

That was all well and good, but would it work on other images? I found this old robot picture of mine, where the vertical posts leave something to be desired:

And the undistorted version:

Definitely better. But would it make a difference? I undistorted all the harbor pictures and re-ran the software, resulting in this:








Some of them look similar, but most look better, I think. The only catch is that undistorting has a blurring effect, so the overall image isn't as crisp. I should have undistorted before I reduced the resolution (these were 8MP to start with...). The biggest gains can be seen along the edge of the roof below, and the bridge that got averaged out in the old version has now lined up and shown itself (see especially the third).
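For reference, the undistortion step inverts the lens model that the calibration estimated. A minimal sketch, assuming the common two-term radial model on normalized camera coordinates, with made-up coefficients (the real k1, k2 come from the calibration output):

```python
import numpy as np

def distort(xy, k1, k2):
    """Apply the two-term radial model: scale each point by 1 + k1*r^2 + k2*r^4."""
    r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
    return xy * (1 + k1 * r2 + k2 * r2 ** 2)

def undistort(xy_d, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly divide the
    distorted coords by the radial factor evaluated at the current estimate."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
        xy = xy_d / (1 + k1 * r2 + k2 * r2 ** 2)
    return xy

# round-trip check with hypothetical coefficients
pt = np.array([[0.3, -0.2]])
recovered = undistort(distort(pt, -0.1, 0.01), -0.1, 0.01)
```

In practice you run the inverse map over every output pixel and resample, which is where the slight blurring above comes from.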

Another thing you can do once you calibrate your camera is convert images to cylindrical coordinates and stitch them together that way. The calibration software told me that my focal length was 672 pixels. Using that, I can map an image from rectangular coordinates (x, y) to cylindrical coordinates (t = theta, h = height) with the following formulas, where f = focal length, v = field of view in radians, and w = width of the picture in pixels:
t = atan2(x, f) * w / v
h = y * f / sqrt(x^2 + f^2)
or the inverse mapping is
x = f * tan(t * v / w)
y = h * sqrt(x^2 + f^2) / f
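The inverse mapping is the one you actually use when resampling: for each output pixel (t, h), look up the source pixel (x, y). A direct transcription of the formulas, using the reported f = 672 and an assumed field of view of 1 radian for illustration:

```python
import math

def rect_to_cyl(x, y, f, w, v):
    """Rectangular (x, y) -> cylindrical (t, h); t in pixels, origin at center."""
    t = math.atan2(x, f) * w / v
    h = y * f / math.sqrt(x ** 2 + f ** 2)
    return t, h

def cyl_to_rect(t, h, f, w, v):
    """Inverse mapping, used to look up source pixels when resampling."""
    x = f * math.tan(t * v / w)
    y = h * math.sqrt(x ** 2 + f ** 2) / f
    return x, y

# round trip with the reported focal length; v = 1.0 is an assumed field of view
f, w, v = 672.0, 800.0, 1.0
t, h = rect_to_cyl(100.0, 50.0, f, w, v)
x, y = cyl_to_rect(t, h, f, w, v)   # round-trips back to the original point
```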
Applying this mapping to an image gives a result that looks like this:

And mosaics that look like this:



etc...
You can even combine them into a larger one that looks like this:

Here removing the lens distortion is just as important; for example, this is the first pair without undistorting first: