In this project, I performed image warping and mosaicing using homographies. These were used to create panoramic images and to rectify flat surfaces in images.
To compute the homography matrix, I used numpy's least-squares solver, finding the minimum-norm solution of a system of equations built from corresponding keypoints in the images, selected with the same correspondence tool used in Project 3: https://cal-cs180.github.io/fa23/hw/proj3/tool.html
For N corresponding coordinates, we create two equations per pair and stack them to solve Ah = b, where h is the vector of unknown homography entries, A is the matrix of coefficients of the equations, and b is the vector of known values. We solve the system using np.linalg.lstsq. The solution h is then reshaped into the 3x3 homography matrix H, and the inverse of the matrix is returned.
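A minimal sketch of this setup (the function name and exact row layout here are illustrative, not necessarily the exact code used; this version returns H directly rather than its inverse):

```python
import numpy as np

def compute_homography(pts1, pts2):
    """Least-squares homography mapping pts1 -> pts2.

    pts1, pts2: (N, 2) arrays of corresponding (x, y) points, N >= 4.
    Returns the 3x3 matrix H with its bottom-right entry fixed to 1.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        # x' = (a x + b y + c) / (g x + h y + 1), and similarly for y'
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1).reshape(3, 3)
```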
To warp an image, we multiply each pixel coordinate by the homography matrix and divide out the resulting scale factor, iterating over every single pixel. Some bookkeeping was done to allocate an output canvas large enough to fit the warped image.
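A rough sketch of the warp as an inverse warp with nearest-neighbor sampling (vectorized here for brevity; the per-pixel loop described above does the same thing, and the canvas-size bookkeeping is assumed to happen before this call):

```python
import numpy as np

def warp_image(im, H, out_shape):
    """Warp im into an output canvas, where H maps source coords -> output coords.

    For each output pixel we map back through H^-1 and sample the source image
    (nearest neighbor). out_shape is the (rows, cols) of the output canvas.
    """
    rows, cols = out_shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    dest = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # homogeneous output coords
    src = np.linalg.inv(H) @ dest
    src /= src[2]                                                # divide out the scale factor
    sx, sy = np.round(src[0]).astype(int), np.round(src[1]).astype(int)
    valid = (sx >= 0) & (sx < im.shape[1]) & (sy >= 0) & (sy < im.shape[0])
    out = np.zeros((rows, cols) + im.shape[2:], dtype=im.dtype)
    out[ys.ravel()[valid], xs.ravel()[valid]] = im[sy[valid], sx[valid]]
    return out
```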
I rectified images by marking points on the four corners of the surface I wanted to flatten. The homography matrix H was then computed between those four points and the corners of a rectangle, and the image was warped using this homography.
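As a concrete illustration using the sketches above (the corner coordinates, rectangle size, and `image` variable here are made up):

```python
import numpy as np

# Hypothetical clicked corners of a flat surface, in (x, y) order
src_corners = np.array([[312, 140], [640, 180], [655, 520], [300, 490]])
# Corners of the target rectangle we want the surface to map onto
dst_corners = np.array([[0, 0], [400, 0], [400, 300], [0, 300]])

H = compute_homography(src_corners, dst_corners)
rectified = warp_image(image, H, (300, 400))   # `image` is the photo being rectified
```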
A set of corresponding points was selected in the two images, and the homography matrix H was computed from them. The first image was warped to the second image using H, and a blank canvas was allocated so that it could fit both images. The seam between the two images was then blended by feathering the edges of the second image, with a feather level chosen manually. The alpha channel was used to make sure the images blended correctly: the feathering weight was computed from each pixel's distance to the edge of the image and turned into a mask. I attempted Laplacian blending, but I found this approach much more intuitive, with a cleaner and smoother result.
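A sketch of one way to build such a feather mask with a distance transform (the feather width of 50 pixels is only an example value, and the variable names in the blending comments are hypothetical):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def feather_mask(shape, feather_width=50):
    """Alpha mask that ramps from 0 at the image border to 1 in the interior."""
    interior = np.ones(shape)
    interior[0, :] = interior[-1, :] = interior[:, 0] = interior[:, -1] = 0
    dist = distance_transform_edt(interior)        # distance of each pixel to the border
    return np.clip(dist / feather_width, 0, 1)

# Blending idea (names hypothetical): weight the second image by its feathered alpha
# alpha = feather_mask(im2.shape[:2])[..., None]   # placed on the canvas at im2's offset
# mosaic = alpha_canvas * im2_canvas + (1 - alpha_canvas) * im1_warped_canvas
```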
I tried to avoid blending altogether by taking pictures that matched more closely. Even with careful photo-taking to minimize the seam, there was still a slight visible seam in the unblended result.
After simple feather blending, the seam virtually disappears.
In this part, we stitch images together without manually selected correspondence points. A pipeline based on Harris corners is used to automatically detect, filter, and match correspondences.
First, all corners in the image are detected with the Harris corner detector, provided in the spec as get_harris_corners() in harris.py. The top fraction of corners by score is then kept, so that more points end up on actual corners rather than on walls. I found that when too many points landed on walls, they interfered with the ANMS algorithm and placed points away from the true corners, which made for bad correspondences. We then perform ANMS: pairwise distances are calculated with the given dist2(), and each distance is masked out unless f(x_i) < c_robust * f(x_j), i.e. the other point's corner strength is sufficiently larger. The suppression radius of each point is the minimum of its masked distances; points are then sorted by decreasing radius and the top corners are selected. I used a constant of 1000 points.
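A sketch of this selection step (a numpy equivalent of dist2 is included so the snippet stands alone; c_robust = 0.9 is the value from the MOPS paper, not necessarily the constant used here):

```python
import numpy as np

def dist2(x, c):
    """Pairwise squared Euclidean distances between rows of x and rows of c
    (the provided harris.py ships an equivalent helper)."""
    return ((x[:, None, :] - c[None, :, :]) ** 2).sum(-1)

def anms(coords, scores, n_keep=1000, c_robust=0.9):
    """Adaptive non-maximal suppression.

    coords: (N, 2) corner coordinates, scores: (N,) Harris strengths.
    Keeps the n_keep corners with the largest suppression radii.
    """
    d = dist2(coords, coords)
    # point j only suppresses point i when i's score is sufficiently weaker: f(x_i) < c_robust * f(x_j)
    suppresses = scores[:, None] < c_robust * scores[None, :]
    d = np.where(suppresses, d, np.inf)
    radii = d.min(axis=1)                      # each point's minimum distance to a suppressor
    keep = np.argsort(-radii)[:n_keep]         # sort by decreasing radius
    return coords[keep]
```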
For each point selected by ANMS, we take the surrounding 40x40 region, resize it down to an 8x8 feature, and normalize it. A sketch of this step follows, and some example feature descriptors are shown after it.
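This sketch assumes a grayscale image, (x, y) coordinate order, and a simple 5x5 block-average downsample from 40x40 to 8x8 (the actual resize method may differ):

```python
import numpy as np

def extract_descriptors(im_gray, coords, patch=40, out=8):
    """Axis-aligned descriptors: a 40x40 window around each corner, downsampled
    to 8x8 and normalized. Corners too close to the border are skipped.
    """
    half = patch // 2
    descs, kept = [], []
    for x, y in coords:
        x, y = int(x), int(y)
        if y - half < 0 or x - half < 0 or y + half > im_gray.shape[0] or x + half > im_gray.shape[1]:
            continue
        window = im_gray[y - half:y + half, x - half:x + half]
        block = patch // out
        small = window.reshape(out, block, out, block).mean(axis=(1, 3))  # 40x40 -> 8x8
        small = (small - small.mean()) / (small.std() + 1e-8)             # normalize
        descs.append(small.ravel())
        kept.append((x, y))
    return np.array(descs), np.array(kept)
```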
For matching feature descriptors, we use the Lowe approach described in the paper. I used the sum of squared differences (SSD) as the similarity metric, along with Lowe's ratio test to filter matches. The idea is to compute the ratio of SSDs between the 1st and 2nd nearest neighbors; if the ratio is below a threshold, the match is distinctive enough to keep.
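A sketch of the matcher (the ratio threshold of 0.6 here is only an example value):

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.6):
    """Lowe-style matching: SSD to the 1st and 2nd nearest neighbors; keep a
    match only when the 1st/2nd SSD ratio is below the threshold."""
    # pairwise SSD between every descriptor in image 1 and image 2
    ssd = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(axis=-1)
    matches = []
    for i, row in enumerate(ssd):
        nn1, nn2 = np.argsort(row)[:2]
        if row[nn1] / (row[nn2] + 1e-12) < ratio:
            matches.append((i, nn1))   # descriptor i in image 1 matches descriptor nn1 in image 2
    return matches
```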
We then perform the RANSAC algorithm. Each iteration has four steps: sample 4 random keypoint pairs; compute the homography matrix H from those pairs; identify the inlier pairs, where dist(Hp, p') is below a constant threshold; and keep the homography with the largest set of inliers. To make the result more reliable, I repeated this process 10000 times. This produced beautifully matched correspondence points, which was very satisfying.
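A sketch of the loop, reusing compute_homography from above (the inlier threshold of 2 pixels is an example value, not necessarily the constant used):

```python
import numpy as np

def ransac_homography(pts1, pts2, n_iters=10000, thresh=2.0):
    """4-point RANSAC over matched points; returns H refit on the largest inlier set."""
    best_H, best_inliers = None, np.array([], dtype=int)
    pts1_h = np.hstack([pts1, np.ones((len(pts1), 1))])      # homogeneous source points
    for _ in range(n_iters):
        idx = np.random.choice(len(pts1), 4, replace=False)  # 1. sample 4 random pairs
        H = compute_homography(pts1[idx], pts2[idx])          # 2. fit H to those pairs
        proj = pts1_h @ H.T
        proj = proj[:, :2] / proj[:, 2:3]                     # apply H and dehomogenize
        errs = np.linalg.norm(proj - pts2, axis=1)            # 3. dist(Hp, p')
        inliers = np.where(errs < thresh)[0]
        if len(inliers) > len(best_inliers):                  # 4. keep the largest inlier set
            best_H, best_inliers = H, inliers
    if len(best_inliers) >= 4:
        best_H = compute_homography(pts1[best_inliers], pts2[best_inliers])  # refit on all inliers
    return best_H, best_inliers
```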
All the previous Part B steps were combined into one function that automatically stitches images together.
For this one, the two results look identical at first glance, but zooming into the trees in the manual stitch reveals some double ghosting. In the auto-stitched result, this ghosting is largely eliminated.
For this one, auto-stitching for some reason took an exorbitant amount of time to produce a good result. To remedy this, I had to downsize the images significantly, but the resulting matching still looks good.
City was a replacement mosaic for Lake, which turned out to be a failure: there was not enough overlap, and not enough detail within the overlap, for the algorithm to recognize similar points, so I could not get it to work despite tuning the parameters. When there is not enough overlap, it is still better to select correspondences manually.