Keypoints – Detectors, Descriptors and Matching

Source code

In this project, I implemented an interest point detector and a feature descriptor. Interest point detectors find particularly salient points in an image, around which we can extract feature descriptors. SIFT, SURF and BRIEF are all examples of descriptors; in this case, we will be using BRIEF. Once we have extracted the interest points, we can use descriptors to match them between images to do neat things like panorama stitching or scene reconstruction.

BRIEF is one of the simplest feature descriptors to implement. It has a very compact representation, is quick to compute, and has a discriminative yet easily computed distance metric, which makes real-time computation possible. Most importantly, as we will see, it is just as powerful as more complex descriptors like SIFT in many cases.

Original image: model_chickenbroth

1. Keypoint Detector

The keypoint detector can be built as follows:

1.1. Smoothing the grayscale image at several scales with a Gaussian filter, creating a Gaussian pyramid.

The results of smoothing the image can be seen in the figure below (smoothing levels 1 through 6).

Gaussian Pyramid
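The smoothing step above can be sketched as follows, assuming NumPy and SciPy are available; the scale parameters `sigma0` and `k` are illustrative choices, not the values from the original code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(im, sigma0=1.0, k=np.sqrt(2), levels=6):
    """Blur a grayscale image at increasing scales sigma0 * k**i."""
    im = im.astype(np.float64)
    # One slice per smoothing level, heavier blur at each level.
    return np.stack([gaussian_filter(im, sigma0 * k**i) for i in range(levels)])
```

Storing the levels as one 3-D array makes the later DoG step a single subtraction.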

1.2. Taking the difference between adjacent levels of the pyramid, i.e. subtracting each Gaussian pyramid level from the one above it, to obtain the Difference of Gaussian (DoG): level 2 minus level 1, level 3 minus level 2, and so on.

dog
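With the pyramid stored as a 3-D array (one level per slice), step 1.2 reduces to one array subtraction:

```python
import numpy as np

def dog_pyramid(gauss_pyr):
    """Difference of Gaussian: subtract each pyramid level from the one above it."""
    return gauss_pyr[1:] - gauss_pyr[:-1]
```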
1.3. Next, I needed to tell edges apart from corners, since the DoG responds strongly along edges as well. To suppress edge responses, I computed the principal curvature of the DoG from step 1.2 and discarded points where one principal curvature dominates the other.
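A minimal sketch of this edge test, assuming NumPy: the ratio R = trace(H)² / det(H) of the DoG's 2×2 Hessian grows large along edges (where one principal curvature dominates), so thresholding R suppresses edge responses, as in Lowe's SIFT paper.

```python
import numpy as np

def principal_curvature_ratio(dog_level):
    """Ratio trace(H)^2 / det(H) of the 2x2 Hessian at every pixel."""
    dy, dx = np.gradient(dog_level)
    dyy, dyx = np.gradient(dy)
    dxy, dxx = np.gradient(dx)
    trace = dxx + dyy
    det = dxx * dyy - dxy * dyx
    # Where det is ~0 the curvatures have opposite sign or vanish;
    # mark those points as infinitely edge-like.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(np.abs(det) > 1e-12, trace**2 / det, np.inf)
```

For a rotationally symmetric blob the two curvatures are equal and R ≈ 4, the smallest value the ratio can take for a true extremum.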

1.4. After computing the principal curvature, I found the extrema of the DoG that pass the curvature test, which selects the most distinctive points in the image. This gave the following points:

features
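Putting steps 1.2–1.4 together, keypoints can be selected as scale-space extrema that pass a contrast test and the curvature test. A sketch assuming SciPy, with `th_contrast` and `th_r` as illustrative thresholds:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def detect_keypoints(dog_pyr, curvature, th_contrast=0.03, th_r=12.0):
    """Scale-space extrema of the DoG that pass contrast and curvature tests."""
    # Extremum in a 3x3x3 neighbourhood across position and scale.
    is_max = dog_pyr == maximum_filter(dog_pyr, size=3)
    is_min = dog_pyr == minimum_filter(dog_pyr, size=3)
    strong = np.abs(dog_pyr) > th_contrast        # reject low-contrast points
    not_edge = np.abs(curvature) < th_r           # reject edge-like points
    lvl, y, x = np.where((is_max | is_min) & strong & not_edge)
    return np.stack([x, y, lvl], axis=1)          # one (x, y, level) row per keypoint
```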


2. BRIEF Descriptor

With keypoints marking the most informative locations in the image, I computed descriptors to match the same points across different images.

chickenbroth_01


First, we have to build a set of BRIEF tests: take a 9×9 patch around each keypoint and compare 256 pairs of pixels within that patch, always using the same pattern. Each comparison yields one bit, and together they form the BRIEF descriptor used to match against the previous image.
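The test pattern and descriptor extraction can be sketched like this, assuming NumPy; the uniformly random pattern and the `(y1, x1, y2, x2)` layout are illustrative choices:

```python
import numpy as np

PATCH = 9
rng = np.random.default_rng(0)
# 256 fixed test pairs, sampled once and reused for every keypoint:
# each row holds (y1, x1, y2, x2) inside the 9x9 patch.
TESTS = rng.integers(0, PATCH, size=(256, 4))

def brief_descriptor(im, x, y):
    """256-bit BRIEF descriptor for the 9x9 patch centred on (x, y)."""
    r = PATCH // 2
    patch = im[y - r:y + r + 1, x - r:x + r + 1]
    # Each bit records whether the first test point is darker than the second.
    return (patch[TESTS[:, 0], TESTS[:, 1]] <
            patch[TESTS[:, 2], TESTS[:, 3]]).astype(np.uint8)
```

Because the same `TESTS` pattern is reused in every image, descriptors of corresponding points differ only in a few bits.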

After computing the descriptors for each image and matching them, the result can be seen in the figure below.

matching
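Matching can be sketched as a nearest-neighbour search under the Hamming distance, assuming NumPy; the 0.8 ratio-test cut-off is an illustrative value:

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Match binary descriptors by Hamming distance with a ratio test."""
    # Hamming distance between every pair: count differing bits.
    dist = (desc1[:, None, :] != desc2[None, :, :]).sum(axis=2)
    matches = []
    for i, row in enumerate(dist):
        order = np.argsort(row)           # desc2 needs at least two entries
        best, second = row[order[0]], row[order[1]]
        # Keep the match only if the best distance is clearly
        # smaller than the second best (an ambiguity filter).
        if best < ratio * second:
            matches.append((i, int(order[0])))
    return matches
```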

References:

[1] P. Burt and E. Adelson. The Laplacian Pyramid as a Compact Image Code. IEEE Transactions on Communications, 31(4):532–540, April 1983.
[2] M. Calonder, V. Lepetit, C. Strecha, and P. Fua. BRIEF: Binary Robust Independent Elementary Features. In ECCV, 2010.
[3] D. G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 60(2):91–110, November 2004.
