Compare similarity of images using OpenCV with Python

Question:

I’m trying to compare an image against a list of other images and return a selection from that list (like Google image search) containing the images with at least 70% similarity.

I got this code from this post and changed it for my context:

import cv2
import numpy as np

# Load the image (OpenCV 2.x API throughout, as in the original post)
img = cv2.imread(MEDIA_ROOT + "/uploads/imagerecognize/armchair.jpg")

# Convert it to grayscale
imgg = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# SURF extraction
surf = cv2.FeatureDetector_create("SURF")
surfDescriptorExtractor = cv2.DescriptorExtractor_create("SURF")
kp = surf.detect(imgg)
kp, descriptors = surfDescriptorExtractor.compute(imgg, kp)

# Setting up samples and responses for kNN
samples = np.array(descriptors)
responses = np.arange(len(kp), dtype=np.float32)

# kNN training
knn = cv2.KNearest()
knn.train(samples, responses)

modelImages = [MEDIA_ROOT + "/uploads/imagerecognize/1.jpg",
               MEDIA_ROOT + "/uploads/imagerecognize/2.jpg",
               MEDIA_ROOT + "/uploads/imagerecognize/3.jpg"]

for modelImage in modelImages:

    # Now loading a template image and searching for similar keypoints
    template = cv2.imread(modelImage)
    templateg = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
    keys = surf.detect(templateg)

    keys, desc = surfDescriptorExtractor.compute(templateg, keys)

    for h, des in enumerate(desc):
        # SURF descriptors are 64- or 128-dimensional depending on the
        # "extended" flag; adjust the reshape accordingly
        des = np.array(des, np.float32).reshape((1, 128))

        retval, results, neigh_resp, dists = knn.find_nearest(des, 1)
        res, dist = int(results[0][0]), dists[0][0]


        if dist < 0.1:  # draw matched keypoints in red
            color = (0, 0, 255)
        else:           # draw unmatched keypoints in blue
            #print dist
            color = (255, 0, 0)

        # Draw the matched keypoint on the original image
        x, y = kp[res].pt
        center = (int(x), int(y))
        cv2.circle(img, center, 2, color, -1)

        # Draw the matched keypoint on the template image
        x, y = keys[h].pt
        center = (int(x), int(y))
        cv2.circle(template, center, 2, color, -1)



    cv2.imshow('img',img)
    cv2.imshow('tm',template)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

My question is: how can I compare the image with the list of images and get only the similar ones? Is there any method to do this?

Asked By: leeeandroo


Answers:

I suggest you take a look at the earth mover’s distance (EMD) between the images.
This metric gives a feeling for how hard it is to transform a normalized grayscale image into another, and it can be generalized to color images. A very good analysis of this method can be found in the following paper:

robotics.stanford.edu/~rubner/papers/rubnerIjcv00.pdf

It can be done both on the whole image and on the histogram (which is really faster than the whole-image method). I’m not sure which method allows a full image comparison, but for histogram comparison you can use the cv.CalcEMD2 function.
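For reference, newer OpenCV bindings expose the same function as cv2.EMD. Here is a minimal sketch of a histogram-based comparison; the emd_distance helper name, the 32-bin layout, and the assumption of grayscale inputs are my own choices, not part of the original answer:

import cv2
import numpy as np

def emd_distance(img1, img2, bins=32):
    # build one "signature" per image: rows of (weight, bin coordinate)
    signatures = []
    for img in (img1, img2):
        hist = cv2.calcHist([img], [0], None, [bins], [0, 256]).flatten()
        hist /= hist.sum()  # normalize so both distributions have equal mass
        sig = np.array([[w, float(i)] for i, w in enumerate(hist)],
                       dtype=np.float32)
        signatures.append(sig)
    emd, _, _ = cv2.EMD(signatures[0], signatures[1], cv2.DIST_L2)
    return emd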

The only problem is that this method does not define a percentage of similarity, but a distance that you can filter on.

I know that this is not a full working algorithm, but it is still a basis for one, so I hope it helps.

EDIT:

Here is a sketch of how the EMD works in principle. The main idea is to have two normalized matrices (two grayscale images divided by their sum), and to define a flux matrix that describes how you move the gray from one pixel of the first image to another in order to obtain the second (it can be defined even for non-normalized images, but that is more difficult).

In mathematical terms the flow matrix is actually a four-dimensional tensor that gives the flow from the point (i,j) of the old image to the point (k,l) of the new one, but if you flatten your images you can transform it into an ordinary matrix, just a little harder to read.

This flow matrix has three constraints: each term should be positive, the sum of each row should return the value of the corresponding starting pixel, and the sum of each column should return the value of the corresponding destination pixel.

Given this, you have to minimize the cost of the transformation, i.e. the sum over all pairs of points of the flow from (i,j) to (k,l) multiplied by the distance between (i,j) and (k,l): COST = sum F[(i,j),(k,l)] * D[(i,j),(k,l)].

It looks a little complicated in words, so here is the test code. The logic is correct, but I’m not sure why the scipy solver complains about it (you should maybe look at openOpt or something similar):

import numpy as np
from scipy.optimize import minimize

# original data, two 2x2 images, normalized
x = np.random.rand(2, 2)
x /= x.sum()
y = np.random.rand(2, 2)
y /= y.sum()

# initial guess of the flux matrix:
# just the outer product of image x (rows) with image y (columns).
# This is a feasible flux, but not an optimal one.
F = (y.flatten() * x.flatten().reshape((y.size, -1))).flatten()

# distance matrix, based on euclidean distance between pixel coordinates
row_x, col_x = np.meshgrid(range(x.shape[0]), range(x.shape[1]))
row_y, col_y = np.meshgrid(range(y.shape[0]), range(y.shape[1]))
rows = (row_x.flatten().reshape((row_x.size, -1)) - row_y.flatten().reshape((-1, row_x.size)))**2
cols = (col_x.flatten().reshape((row_x.size, -1)) - col_y.flatten().reshape((-1, row_x.size)))**2
D = np.sqrt(rows + cols)

D = D.flatten()
x = x.flatten()
y = y.flatten()

# cost function: COST = sum(F*D)
fun = lambda F: np.sum(F * D)
jac = lambda F: D

# array of constraints
# (the sum-to-one constraint is implicit, given the later constraints)
cons = []
# each row should sum to the value of the starting pixel, and each column
# to the value of the destination pixel. Note the i=i default argument:
# without it, every lambda would close over the same (final) value of i.
cons += [{'type': 'eq', 'fun': lambda F, i=i: np.sum(F.reshape((x.size, y.size))[i, :]) - x[i]}
         for i in range(x.size)]
cons += [{'type': 'eq', 'fun': lambda F, i=i: np.sum(F.reshape((x.size, y.size))[:, i]) - y[i]}
         for i in range(y.size)]
# the values of F should be positive: one (min, max) pair per variable
bnds = [(0, None)] * F.size

res = minimize(fun=fun, x0=F, method='SLSQP', jac=jac, bounds=bnds, constraints=cons)

The variable res contains the result of the minimization… but as I said, I’m not sure why it complains about a singular matrix. (Note the i=i default argument in the constraints above: without it, all the lambdas close over the same final value of i, so the equality constraints become identical, which is one plausible source of a singular constraint matrix.)

The only problem with this algorithm is that it is not very fast, so it isn’t possible to run it on demand; you have to run it patiently when the dataset is created and store the results somewhere.
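For example, here is a minimal sketch of that offline/online split using histogram signatures (modelImages is the list from the question; emd_signature, find_similar, and the 0.5 threshold are placeholders of mine to tune on your own data):

import cv2
import numpy as np

def emd_signature(path, bins=32):
    # precompute a histogram signature once per image (offline step)
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([img], [0], None, [bins], [0, 256]).flatten()
    hist /= hist.sum()
    return np.array([[w, float(i)] for i, w in enumerate(hist)], dtype=np.float32)

# offline: build and store the index (e.g. pickle it to disk)
index = {path: emd_signature(path) for path in modelImages}

# online: compare a query signature against the stored ones
def find_similar(query_sig, index, threshold=0.5):
    return [path for path, sig in index.items()
            if cv2.EMD(query_sig, sig, cv2.DIST_L2)[0] < threshold]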

Answered By: EnricoGiampieri

You are embarking on a massive problem, referred to as “content-based image retrieval”, or CBIR. It’s a huge and active field. There are no finished algorithms or standard approaches yet, although there are a lot of techniques, all with varying levels of success.

Even Google image search doesn’t do this (yet) – they do text-based image search – e.g., they search for text in a page that’s like the text you searched for. (And I’m sure they’re working on using CBIR; it’s the holy grail for a lot of image-processing researchers.)

If you have a tight deadline or need to get this done and working soon… yikes.

Here’s a ton of papers on the topic:

http://scholar.google.com/scholar?q=content+based+image+retrieval

Generally you will need to do a few things:

  1. Extract features (either at local interest points or globally: SIFT, SURF, histograms, etc.)
  2. Cluster / build a model of image distributions

This can involve feature descriptors, image gists, multiple-instance learning, etc. A minimal sketch of these two steps follows.
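As a rough illustration of steps 1 and 2, here is a minimal bag-of-visual-words sketch using ORB (a patent-free alternative to SIFT/SURF) and scikit-learn’s k-means; the function names and parameters are illustrative assumptions, not a standard recipe:

import cv2
import numpy as np
from sklearn.cluster import KMeans

# Step 1: extract local feature descriptors from each image
def extract_descriptors(paths):
    orb = cv2.ORB_create()
    all_desc = []
    for p in paths:
        gray = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(gray, None)
        if desc is not None:
            all_desc.append(desc.astype(np.float32))
    return all_desc

# Step 2: cluster the descriptors into a "visual vocabulary" and describe
# each image as a normalized histogram of visual words
def bow_histograms(all_desc, n_words=64):
    kmeans = KMeans(n_clusters=n_words).fit(np.vstack(all_desc))
    hists = []
    for desc in all_desc:
        words = kmeans.predict(desc)
        hist = np.bincount(words, minlength=n_words).astype(float)
        hists.append(hist / hist.sum())
    return hists

Each image then becomes a fixed-length histogram that can be compared with, e.g., cosine or chi-squared distance.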

Answered By: Pete

I wrote a program to do something very similar maybe 2 years ago using Python/Cython. Later I rewrote it in Go to get better performance. The basic idea comes from findimagedupes, IIRC.

It basically computes a “fingerprint” for each image, and then compares these fingerprints to match similar images.

The fingerprint is generated by resizing the image to 160×160, converting it to grayscale, adding some blur, normalizing it, then resizing it to 16×16 monochrome. At the end you have 256 bits of output: that’s your fingerprint. This is very easy to do using convert:

convert path[0] -sample 160x160! -modulate 100,0 -blur 3x99 \
    -normalize -equalize -sample 16x16 -threshold 50% -monochrome mono:-

(The [0] in path[0] is used to extract only the first frame of animated GIFs; if you’re not interested in such images you can just remove it.)
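If you’d rather stay in Python, here is a rough Pillow-based approximation of the same pipeline; the filter parameters only loosely mirror the convert flags, and the fingerprint is returned as a single 256-bit integer:

from PIL import Image, ImageFilter, ImageOps

def fingerprint(path):
    img = Image.open(path).convert("L").resize((160, 160))
    img = img.filter(ImageFilter.GaussianBlur(3))   # ~ -blur
    img = ImageOps.autocontrast(img)                # ~ -normalize
    img = ImageOps.equalize(img)                    # ~ -equalize
    img = img.resize((16, 16))
    bits = 0
    for px in img.getdata():                        # ~ -threshold 50%
        bits = (bits << 1) | (px >= 128)
    return bits                                     # 256-bit integer

With fingerprints stored as single integers like this, the XOR-and-count step below reduces to bin(fp1 ^ fp2).count("1").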

After applying this to 2 images, you will have 2 (256-bit) fingerprints, fp1 and fp2.

The similarity score of these 2 images is then computed by XORing these 2 values and counting the bits set to 1. To do this bit counting, you can use the bitsoncount() function from this answer:

# fp1 and fp2 are stored as lists of 8 (32-bit) integers
def bitsoncount(x):
    return bin(x).count("1")  # simple stand-in for the linked bitsoncount()

score = 0
for n in range(8):
    score += bitsoncount(fp1[n] ^ fp2[n])

score will be a number between 0 and 256 indicating how similar your images are. In my application I divide it by 2.56 (normalizing to 0–100), and I’ve found that images with a normalized score of 20 or less are often identical.

If you want to implement this method and use it to compare lots of images, I strongly suggest you use Cython (or just plain C) as much as possible: XORing and bit counting is very slow with pure Python integers.
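If Cython is not an option, a numpy version can also avoid the per-bit Python work; a small sketch, assuming each fingerprint is stored as 32 uint8 bytes:

import numpy as np

rng = np.random.default_rng(0)
fp1 = rng.integers(0, 256, 32, dtype=np.uint8)  # stand-in 256-bit fingerprints
fp2 = rng.integers(0, 256, 32, dtype=np.uint8)

# XOR the byte arrays, expand to individual bits, and count the ones
score = int(np.unpackbits(fp1 ^ fp2).sum())     # 0..256, as above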

I’m really sorry, but I can’t find my Python code anymore. Right now I only have a Go version, but I’m afraid I can’t post it here (it’s tightly integrated in some other code, and probably a little ugly, as it was my first serious program in Go…).

There’s also a very good “find by similarity” function in GQView/Geeqie; its source is here.

Answered By: Schnouki

For a simpler implementation of Earth Mover’s Distance (aka Wasserstein Distance) in Python, you could use Scipy:

from keras.preprocessing.image import load_img, img_to_array
from scipy.stats import wasserstein_distance
import numpy as np

def get_histogram(img):
  '''
  Get the histogram of an image. For an 8-bit, grayscale image, the
  histogram will be a 256 unit vector in which the nth value indicates
  the percent of the pixels in the image with the given darkness level.
  The histogram's values sum to 1.
  '''
  h, w = img.shape[:2]
  hist = [0.0] * 256
  for i in range(h):
    for j in range(w):
      hist[int(img[i, j])] += 1  # cast: img_to_array yields floats
  return np.array(hist) / (h * w)

a = img_to_array(load_img('a.jpg', grayscale=True))
b = img_to_array(load_img('b.jpg', grayscale=True))
a_hist = get_histogram(a)
b_hist = get_histogram(b)
# pass the intensity levels as values and the histograms as weights,
# so the distance is between the two intensity distributions
dist = wasserstein_distance(np.arange(256), np.arange(256), a_hist, b_hist)
print(dist)
Answered By: duhaime