How can I contour low-contrast objects in python?

Question:

I am having difficulties contouring this type of low-contrast object:

a low-contrast plagioclase

Where I aim for an output such as:

[image: desired contour output]

In the example above I used cv2.findContours with code like the one below, but with a threshold value of 105: ret,thresh = cv.threshold(blur, 105, 255, 0). However, if I reproduce it for the low-contrast image, I fail to find an optimal threshold value:

import numpy as np
from PIL import Image
import requests
from io import BytesIO
import cv2 as cv
import matplotlib.pyplot as plt

url = 'https://i.stack.imgur.com/OeZJ9.jpg'
response = requests.get(url)

img = Image.open(BytesIO(response.content)).convert('RGB')
img = np.array(img)

# PIL returns an RGB array, so convert RGB (not BGR) to grayscale
imgray = cv.cvtColor(img, cv.COLOR_RGB2GRAY)

blur = cv.GaussianBlur(imgray, (105, 105), 0)

ret, thresh = cv.threshold(blur, 205, 255, 0)
# OpenCV >= 4 returns (contours, hierarchy)
cnts, hierarchy = cv.findContours(thresh, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
cv.drawContours(img, cnts, -1, (0, 0, 255), 5)
plt.imshow(img, cmap='gray')

which outputs:
contour not achieved

I understand that the problem is that the intensities of the background and the object overlap, but I can’t find any other successful method. Other things I’ve tried include:

  1. Thresholding in skimage with skimage.measure.find_contours (see the sketch below).
  2. Watershed algorithm in OpenCV.
  3. Eroding and dilating in OpenCV, which lowers the contour resolution too much.

I would appreciate help contouring, with as much resolution as possible, this object with low contrast with respect to the background.
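For reference, a minimal sketch of how attempt 1 (thresholding in skimage) looked; the iso-value of 0.5 is only illustrative, not a value that actually worked:

import matplotlib.pyplot as plt
from skimage import io, color, measure

# illustrative sketch of attempt 1; the iso-value (0.5) is a placeholder
img = io.imread('https://i.stack.imgur.com/OeZJ9.jpg')
gray = color.rgb2gray(img)  # float grayscale image in [0, 1]

# find iso-valued contours at the given gray level
contours = measure.find_contours(gray, 0.5)

plt.imshow(gray, cmap='gray')
for contour in contours:
    # find_contours returns (row, col) coordinates, so columns are x and rows are y
    plt.plot(contour[:, 1], contour[:, 0], linewidth=1)
plt.axis('off')
plt.show()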

Asked By: db_max

||

Answers:

To solve your problem I would go with the snippet below, which detects contours and filters them by area, keeping only the ones larger than a given size. In your case I’m assuming you are searching for a single object, but I left the code ready to be extended to pictures containing multiple objects.

import cv2
import numpy as np


# input image
path = "16.jpg"

# finding contours
def getContours(img, imgContour):    

    contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    
    finalContours = []
    
    # for each contour found
    for cnt in contours:
        # find its area in pixel^2
        area = cv2.contourArea(cnt)
        print("Contour area: ", area)

        # fixed assuming you are searching for the biggest object
        # value can be found via previous print
        minArea = 18000
        
        if (area > minArea):

            perimeter = cv2.arcLength(cnt, False)
            
            # smaller epsilon -> more vertices detected [= more precision]
            # improving bounding box precision - original value 0.02 * perimeter
            epsilon = 0.002*perimeter
            # check how many vertices         
            approx = cv2.approxPolyDP(cnt, epsilon, True)
            print(len(approx))
            
            finalContours.append([len(approx), area, approx, cnt])

    # leaving this part if you have more objects to detect
    # not needed when minArea has been chosen to detect only one object
    # sorting the final results in descending order depending on the area
    finalContours = sorted(finalContours, key = lambda x:x[1], reverse=True)
    print("Final Contours number: ", len(finalContours))
    
    for con in finalContours:
        cv2.drawContours(imgContour, con[3], -1, (0, 0, 255), 3)

    return imgContour, finalContours

 
# sourcing the input image
img = cv2.imread(path)
# img.shape gives back height, width, color in this order
original_height, original_width, color = img.shape 
print('Original Dimensions : ', original_width, original_height)

# resizing to see the entire image
scale_percent = 30
width = int(original_width * scale_percent / 100)
height = int(original_height * scale_percent / 100)
print('Resized Dimensions : ', width, height)

dim = (width, height)
# resize image
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
cv2.imshow("Starting image", resized)
cv2.waitKey()

# blurring
imgBlur = cv2.GaussianBlur(resized, (7, 7), 1)
# graying
imgGray = cv2.cvtColor(imgBlur, cv2.COLOR_BGR2GRAY)

# initializing Canny thresholds
threshold1 = 14
threshold2 = 17

# canny
imgCanny = cv2.Canny(imgGray, threshold1, threshold2)
# showing the last produced result
cv2.imshow("Canny", imgCanny)
cv2.waitKey()

kernel = np.ones((2, 2))
imgDil = cv2.dilate(imgCanny, kernel, iterations = 3)
imgThre = cv2.erode(imgDil, kernel, iterations = 3)

imgFinalContours, finalContours = getContours(imgThre, resized)

# show the contours on the unfiltered resized image
cv2.imshow("Final Contours", imgFinalContours)
cv2.waitKey()
cv2.destroyAllWindows()

The final output you get running this with the chosen values is the following:

[image: snippet output with the detected contour]

Answered By: Antonino

What has been proposed here

Contour detection by color gradient changes (See Antonino’s reply)

Contouring objects with low contrast with respect to the background is not a trivial task. Although Antonino’s snippet gets close, it is not enough for reliable contour detection:

  • finalContours is not a single contour line but an array of unclear lines, even when using the best possible parameters (see below):
    [image: best result obtained: an array of unclear contour lines]

  • To find the best possible parameters, I used the pseudocode below, which outputs thousands of images that were then visually categorised (see the output image). However, none of the parameter combinations was successful, i.e. none produced the desired contour:

     # uses cv2, numpy (np), matplotlib.pyplot (plt) and getContours from the snippet above
     images = ["b_36_2.jpg", "b_78_2.jpg", "b_51_2.jpg", "b_72_2.jpg", "a_78_2.jpg", "a_70_2.jpg"]
     for scale_percent in range(30,51,5):
         for threshold1 in range(5, 21):
             for threshold2 in range(10,31):
                 for gauss_kernel in range(1,11,2):
                     for std in [0,1,2]:
                         for kernel_size in range(2,6):
                             for iterations_dialation in [2,3]:
                                 for iterations_erosion in [2,3]:
                                     for img in images:
                                         name = img[3:]
                                         img = cv2.imread('my/img/dir'+img)
    
                                         original_height, original_width, color = img.shape 
                                         width = int(original_width * scale_percent / 100)
                                         height = int(original_height * scale_percent / 100)
    
                                         dim = (width, height)
                                         resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
    
                                         imgBlur = cv2.GaussianBlur(resized, (gauss_kernel, gauss_kernel), std)
    
                                         imgGray = cv2.cvtColor(imgBlur, cv2.COLOR_BGR2GRAY)
    
                                         imgCanny = cv2.Canny(imgGray, threshold1, threshold2)
    
                                         plt.subplot(231),plt.imshow(resized), plt.axis('off')
                                         plt.title('Original '+ str(name))    
    
                                         plt.subplot(232),plt.imshow(imgCanny,cmap = 'gray')
                                         plt.title('Canny Edge-detector\n thr1 = {}, thr2 = {}'.format(threshold1, threshold2)), plt.axis('off')
    
                                         kernel_s = (kernel_size, kernel_size)
                                         kernel = np.ones(kernel_s)
    
                                         imgDil = cv2.dilate(imgCanny, kernel, iterations = iterations_dialation)
                                         plt.subplot(233),plt.imshow(imgDil, cmap = 'gray'), plt.axis('off')
                                         plt.title("Dilated\n({},{}) iterations = {}".format(kernel_size, kernel_size, iterations_dialation))
    
                                         imgThre = cv2.erode(imgDil, kernel, iterations = iterations_erosion)
                                         plt.subplot(234),plt.imshow(imgThre, cmap = 'gray'), plt.axis('off')
                                         plt.title('Eroded\n({},{}) iterations = {}'.format(kernel_size, kernel_size, iterations_erosion))
    
                                         imgFinalContours, finalContours = getContours(imgThre, resized)
    
                                         plt.subplot(235), plt.axis('off')
                                         plt.title("Contours")
    
                                         plt.subplot(236), plt.axis('off')
                                         plt.title('Contours')
    
                                         plt.tight_layout(pad = 0.1)
    
                                         plt.imshow(imgFinalContours) 
    
                                         plt.savefig("my/results/"
                                                     +name[:6]+"_scale_percent({})".format(scale_percent)+
                                                     "_threshold1({})".format(threshold1)
                                                    +"_threshold2({})".format(threshold2)
                                                    +"_gauss_kernel({})".format(gauss_kernel)
                                                    +"_std({})".format(std)
                                                    +"_kernel_size({})".format(kernel_size)
                                                    +"_iterations_dialation({})".format(iterations_dialation)
                                                    +"_iterations_erosion({})".format(iterations_erosion)
                                                    +".jpg")
                                         plt.title(name)
    
    

which outputs:
[image: example of the categorised parameter-sweep outputs]

The solution

Using a pretrained Deep Learning model

A preliminary idea was to use GrabCut to train a model, but that would be very costly in terms of time. Therefore, pretrained deep learning models were the first thing to try. While some tools failed, this tool outperformed every other method tried before (see image below); all the credit goes to the creator of the GitHub repository and, by extension, to the creators of the underlying models (U^2-Net, BASNet). https://github.com/OPHoperHPO/image-background-remove-tool doesn’t need any image preprocessing, has very straightforward documentation on how to deploy it, and even provides an executable Google Colab notebook. The output is a PNG image with a transparent background:
[image: background-removed PNG with transparent background]
Hence, all it takes to find the contour is to isolate the alpha channel:

import cv2
import numpy as np
import matplotlib.pyplot as plt

filename = '/a_58_2_pg_0.png'
# IMREAD_UNCHANGED keeps the alpha channel of the PNG
image_4channel = cv2.imread(filename, cv2.IMREAD_UNCHANGED)
alpha_channel = image_4channel[..., -1]
contours, hier = cv2.findContours(alpha_channel, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

for idx, contour in enumerate(contours):
    # create an empty mask with the same shape as the alpha channel
    mask = np.zeros(alpha_channel.shape, np.uint8)

    # draw the contour; thickness -1 fills the mask
    mask = cv2.drawContours(mask, [contour], -1, (255, 255, 255), -1)
    # one file per contour, so successive contours do not overwrite each other
    cv2.imwrite('/contImage_{}.jpg'.format(idx), mask)
    plt.imshow(mask)

[image: filled contour mask of the object]
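
If you need the contour drawn on the original photograph rather than a filled mask, here is a minimal sketch (the file names are placeholders, and it assumes the original image and the background-removed PNG have the same dimensions):

import cv2

# placeholders: the original photograph and the background-removed PNG produced by the tool
original = cv2.imread('/a_58_2.jpg')
image_4channel = cv2.imread('/a_58_2_pg_0.png', cv2.IMREAD_UNCHANGED)
alpha_channel = image_4channel[..., -1]

contours, hier = cv2.findContours(alpha_channel, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# a positive thickness draws the outline instead of filling it
cv2.drawContours(original, contours, -1, (0, 0, 255), 3)
cv2.imwrite('/contour_overlay.jpg', original)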

Answered By: db_max