How to measure average thickness of labeled segmented image

Question:

I have an image and I've done some pre-processing on it. Below is my pre-processing:

import cv2
import numpy as np

# read the image as grayscale (path elided)
img = cv2.imread("...my_drive...\image_69.tif", 0)

# median blur, then Otsu threshold
median = cv2.medianBlur(img, 13)
ret, th = cv2.threshold(median, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# two closing passes with wide horizontal kernels
kernel = np.ones((3, 15), np.uint8)
closing1 = cv2.morphologyEx(th, cv2.MORPH_CLOSE, kernel, iterations=2)
kernel = np.ones((1, 31), np.uint8)
closing2 = cv2.morphologyEx(closing1, cv2.MORPH_CLOSE, kernel)

# opening to remove small artifacts
kernel = np.ones((1, 13), np.uint8)
opening1 = cv2.morphologyEx(closing2, cv2.MORPH_OPEN, kernel, iterations=2)

So basically I used threshold filtering, closing, and opening, and the result looks like this:

[image: result after thresholding, closing and opening]

Please note that type(opening1) returns numpy.ndarray, so the image at this step is a 1021 x 1024 numpy array.

Then I labeled my image:

from skimage import measure

label_image = measure.label(opening1, connectivity=opening1.ndim)
props = measure.regionprops_table(label_image, properties=['label', 'area', 'coords'])

and the result looks like this:

[image: labeled image with 6 regions]

Please note that type(label_image) also returns numpy.ndarray, so the image at this step is again a 1021 x 1024 numpy array.

As you can see, the image currently has 6 labels. Some of them are short, small pieces, so I tried to keep only the top 2 labels by area:

from skimage.measure import regionprops

slc = label_image
rps = regionprops(slc)
areas = [r.area for r in rps]

# indices of regions sorted by area, largest first
idx = np.argsort(areas)[::-1]
new_slc = np.zeros_like(slc)

# keep only the two largest regions
for i in idx[:2]:
    new_slc[tuple(rps[i].coords.T)] = i + 1

Now the result looks like this:

[image: only the two largest regions kept]

It looks like I succeeded in keeping the top 2 regions (note that by changing idx[:2] you can select either the thick white layer or the thin one). Now:

What I want to do: find the average thickness of these two regions.

Also, please note that each pixel corresponds to 314 nm.

Can anyone advise how to do this?

Original photo: below is a low-quality version of my original image, so you can better understand why I did all the pre-processing.

[image: low-quality version of the original image]

You can also access the original photo here: https://www.mediafire.com/file/20h66aq83edy1h7/img.7z/file

Asked By: Ross_you


Answers:

  • Deskew the image so the layers run horizontally.

[image: deskewed image]

  • Then count the pixels of the label color you want to measure in each column, and divide the total by the number of columns to get the average thickness (a rough sketch follows).
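
A minimal sketch of that idea, assuming new_slc is the labeled array from the question, that the layers are roughly horizontal after deskewing, and that the deskew angle shown is only a hypothetical placeholder:

import numpy as np
import skimage.transform

PIXEL_SIZE_NM = 314      # given in the question
deskew_angle = 3.2       # hypothetical value; take it from your deskew step

for lbl in np.unique(new_slc):
    if lbl == 0:
        continue  # skip background
    mask = (new_slc == lbl).astype(float)
    # rotate so the layer runs horizontally, then re-binarize
    straight = skimage.transform.rotate(mask, deskew_angle, resize=False) > 0.5
    col_counts = np.count_nonzero(straight, axis=0)
    col_counts = col_counts[col_counts > 0]  # only columns the layer actually crosses
    mean_px = col_counts.mean()
    print("label", lbl, ":", mean_px, "px =", mean_px * PIXEL_SIZE_NM, "nm")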
Answered By: Jabbar

Here is one way to do that in Python/OpenCV.

  • Read the input
  • Convert to gray
  • Threshold to binary
  • Get the contours and filter on area so that we have only the two primary lines
  • Sort by area
  • Select the first (smaller and thinner) contour
  • Draw it white filled on a black background
  • Get its skeleton
  • Get the points of the skeleton
  • Fit a line to the points and get the rotation angle of the skeleton
  • Loop over each of the two contours, drawing it white filled on a black background. Rotate it so the line is horizontal, get the vertical thickness as the average count of white pixels per column using np.count_nonzero(), and print the value.
  • Save intermediate images

Input:

[image: input image]

import cv2
import numpy as np
import skimage.morphology
import skimage.transform
import math

# read image
img = cv2.imread('lines.jpg')

# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

# threshold
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]

# get contours
new_contours = []
img2 = np.zeros_like(thresh, dtype=np.uint8)
contour_img = thresh.copy()
contour_img = cv2.merge([contour_img,contour_img,contour_img])
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for cntr in contours:
    area = cv2.contourArea(cntr)
    if area > 1000:
        cv2.drawContours(contour_img, [cntr], 0, (0,0,255), 1)
        cv2.drawContours(img2, [cntr], 0, (255), -1)
        new_contours.append(cntr)

# sort contours by area
cnts_sort = sorted(new_contours, key=lambda x: cv2.contourArea(x), reverse=False)

# select first (smaller) sorted contour
first_contour = cnts_sort[0]
contour_first_img = np.zeros_like(thresh, dtype=np.uint8)
cv2.drawContours(contour_first_img, [first_contour], 0, (255), -1)

# thin smaller contour
thresh1 = (contour_first_img/255).astype(np.float64)
skeleton = skimage.morphology.skeletonize(thresh1)
skeleton = (255*skeleton).clip(0,255).astype(np.uint8)

# get skeleton points
pts = np.column_stack(np.where(skeleton.transpose()==255))

# fit line to pts
(vx,vy,x,y) = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01)
#print(vx,vy,x,y)
x_axis = np.array([1, 0])    # unit vector in the same direction as the x axis
line_direction = np.array([vx, vy])    # unit vector in the same direction as your line
dot_product = np.dot(x_axis, line_direction)
[angle_line] = (180/math.pi)*np.arccos(dot_product)
print("angle:", angle_line)

# loop over each sorted contour
# draw contour filled on black background
# rotate
# get mean thickness from np.count_nonzero
black = np.zeros_like(thresh, dtype=np.uint8)
i = 1
for cnt in cnts_sort:
    cnt_img = black.copy()
    cv2.drawContours(cnt_img, [cnt], 0, (255), -1)
    cnt_img_rot = skimage.transform.rotate(cnt_img, angle_line, resize=False)
    thickness = np.mean(np.count_nonzero(cnt_img_rot, axis=0))
    print("line ",i,"=",thickness)
    i = i + 1

# save resulting images
cv2.imwrite('lines_thresh.jpg',thresh)
cv2.imwrite('lines_filtered.jpg',img2)
cv2.imwrite('lines_small_contour_skeleton.jpg',skeleton )

# show thresh and result    
cv2.imshow("thresh", thresh)
cv2.imshow("contours", contour_img)
cv2.imshow("lines_filtered", img2)
cv2.imshow("first_contour", contour_first_img)
cv2.imshow("skeleton", skeleton)
cv2.waitKey(0)
cv2.destroyAllWindows()

Threshold image:

[image: threshold result]

Contour image:

[image: contours drawn on the threshold image]

Filtered contour image:

[image: filtered contours, white filled]

Skeleton image:

[image: skeleton of the smaller contour]

Angle (in degrees) and Thicknesses (in pixels):

angle: 3.1869032185349733
line  1 = 8.79219512195122
line  2 = 49.51609756097561

To get the thickness in nm, multiply the thickness in pixels by your 314 nm/pixel scale.
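
For instance, a quick conversion sketch for the value printed for line 1 above (the 314 nm/pixel factor comes from the question):

PIXEL_SIZE_NM = 314
thickness_px = 8.79219512195122      # value printed for line 1 above
thickness_nm = thickness_px * PIXEL_SIZE_NM
print(thickness_nm, "nm")            # roughly 2761 nm, i.e. about 2.8 micrometers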

Answered By: fmw42

This can be done with various tools in scipy. Assuming you have loaded the image:

import numpy as np
import PIL.Image
from matplotlib.pyplot import imshow, plot, hlines, legend

I = PIL.Image.open("input.jpg")
img = np.array(I).mean(axis=2)   # collapse RGB to a single grayscale channel
mask = img == 255                # or some kind of thresholding
imshow(mask)  # note: this is a binary image; any green coloring is just a rendering/aliasing artifact

[image: binary mask]

If one zooms in, one can see split-up regions:
[image: zoomed view showing split-up regions]

To get around that we can dilate the mask

from scipy import ndimage as ni
mask1 = ni.binary_dilation(mask, iterations=2)
imshow(mask1)

[image: dilated mask]

Now we can find connected regions and pick the ones with the most pixels, which should be the two lines of interest:

lab, nlab = ni.label(mask1)
max_labs = np.argsort([ (lab==i).sum() for i in range(1, nlab+1)])[::-1]+1
imshow(lab==max_labs[0])

[image: largest labeled region]

and imshow(lab==max_labs[1]) shows the second one:
[image: second-largest labeled region]

Working with the first line as an example:

from scipy.stats import linregress

# fit a straight line through the first region's pixel coordinates
y0, x0 = np.where(lab == max_labs[0])
l0 = linregress(x0, y0)
xi = np.arange(img.shape[1])
yi = xi * l0.slope + l0.intercept
plot(xi, yi, 'r--')

[image: fitted line overlaid on the first region]

Interpolate along this region at different y-intercepts and compute the average signal along each line:

from scipy.interpolate import RectBivariateSpline
img0 = img.copy()
img0[~(lab==max_labs[0])] = 0  # set everything outside this line region to 0
rbv = RectBivariateSpline(np.arange(img.shape[0]), np.arange(img.shape[1]), img0)
prof0 = [rbv.ev(yi+i, xi).mean() for i in np.arange(-300,300)]  # pick a wide window here (-300,300), can be more technical, but not necessary
plot(prof0)

[image: intensity profile across the line]

Use your favorite method to compute the FWHM of this profile, then multiply by your pixel-to-nanometer factor. I would just use a Gaussian fit to compute the FWHM:

from scipy.optimize import minimize

xvals = np.arange(len(prof0))
yvals = np.array(prof0)

def func(p, xvals, yvals):
    # sum of squared residuals between a Gaussian model and the profile
    mu, var, amp = p
    model = np.exp(-(xvals - mu)**2 / 2 / var) * amp
    resid = (model - yvals)**2
    return resid.sum()

p0 = 300, 200, 255  # initial estimate of mu, variance, amplitude
fit_gauss = minimize(func, x0=p0, args=(xvals, yvals), method='Nelder-Mead')

mu, var, amp = fit_gauss.x
fwhm = 2.355 * np.sqrt(var)  # FWHM = 2*sqrt(2*ln 2)*sigma ~ 2.355*sigma

# display using matplotlib plot / hlines
plot(xvals, yvals)
plot(xvals, amp * np.exp(-(xvals - mu)**2 / 2 / var))
hlines(amp * 0.5, mu - fwhm / 2., mu + fwhm / 2., color='r')
legend(("profile", "fit gauss", "fwhm=%.2f pix" % fwhm))

[image: profile, Gaussian fit, and FWHM marker]

Finally, thickness = fwhm * 314 nm, or about 13 microns.

Following the exact same approach for the second line (lab==max_labs[1]) gives a thickness of about 2.2 microns:
[image: Gaussian fit for the second line]

Note: I was using interactive plotting for this example, so the calls to imshow, plot, etc. are meant mostly as a reference for the reader. One may need to take extra steps (zooming, etc.) to recreate the exact images shown.

Answered By: dermen