OpenCV findContours() detects contours only if the image is saved and read beforehand
Question:
Providing some context:
I am trying to get the number of boxes in this image.
I have the above image stored in an ndarray blank_img.
If I run the following:
v = np.median(blank_img)
sigma = 0.33
lower = int(max(0, (1.0 - sigma) * v))
upper = int(min(255, (1.0 + sigma) * v))
edges = cv2.Canny(blank_img, lower, upper, apertureSize=3)
edges = cv2.dilate(edges, np.ones((2, 2), dtype=np.uint8))
cnts = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
len(cnts) is equal to 1.
But if I save the array blank_img before running the Canny edge detection and findContours, as follows:
cv2.imwrite('boxes.jpg', blank_img)
blank_img = cv2.imread('boxes.jpg')
Now when I run my findContours snippet, len(cnts) is > 1. I would like to know how to fix this so that I can avoid the overhead of saving and re-reading the image.
~~EDIT~~
Below is the code I used to create blank_img:
blank_img = np.zeros((height, width, 3), np.uint8)  # black canvas (255 * np.zeros(...) is still all zeros)
for line in lines:
    x1, y1, x2, y2 = line[0]  # HoughLinesP output has shape (N, 1, 4)
    cv2.line(blank_img, (x1, y1), (x2, y2), (255, 255, 255), 1)
Where lines is the array of line segments returned by HoughLinesP:
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100, None, minlinelength, maxlinegap)
Answers:
Here lies the problem:
See this image:
The small green independent rectangle at the bottom left corner appears only after you save the image and read it back. The actual length of cnts should be 1 (the outer big rectangle) only.
This small rectangle shows up in the Canny edge image only after the round trip through disk. The most likely cause is that .jpg is a lossy format: the compression introduces small artifacts around the thin one-pixel lines, and Canny then picks these up as extra edges. Just remove those two lines and use the in-memory array directly (or, if you do need a file, save to a lossless format such as .png).