Stereo vision system with OpenCV: disparity and depth map (Python)

Question:

I am working on a stereo vision project. I set up my stereo camera (two parallel matrix cameras) and shot a picture, then I read the OpenCV documentation and tried the examples on other datasets, and everything seemed to work just fine. With my own pictures, however, the disparity image is a mess. I tried both the BM and SGBM methods. The main question: has anyone had this kind of problem before? Is our camera setup bad, or am I just missing something important?
I attach my code and pictures.

import cv2
import numpy as np
from matplotlib import pyplot as plt

left = cv2.imread("../JR_Pictures/JR_1_Test_left.bmp", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("../JR_Pictures/JR_1_Test_right.bmp", cv2.IMREAD_GRAYSCALE)

left = cv2.resize(left, (0, 0), None, 0.5, 0.5)
right = cv2.resize(right, (0, 0), None, 0.5, 0.5)

fx = 942.8         # lens focal length in pixels
baseline = 58.0    # distance in mm between the two cameras
disparities = 128  # number of disparities to consider
block = 13         # block size to match
units = 0.512      # depth units, adjusted for the output to fit in one byte

# block matching: a left matcher plus a matching right matcher for WLS filtering
left_matcher = cv2.StereoBM_create(numDisparities=disparities, blockSize=block)
right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)
disparityL = left_matcher.compute(left, right)
disparityR = right_matcher.compute(right, left)  # note the swapped argument order

# WLS post-filtering of the raw disparity
sigma = 1.5
lmbda = 32000.0
wls_filter = cv2.ximgproc.createDisparityWLSFilter(left_matcher)
wls_filter.setLambda(lmbda)
wls_filter.setSigmaColor(sigma)
filtered_disp = wls_filter.filter(disparityL, left, disparity_map_right=disparityR)

# show the inputs and the raw/filtered disparities side by side
numpy_horizontal = np.hstack((left, right))
hori = np.hstack((disparityL, filtered_disp))
cv2.imshow('HorizontalStack1', numpy_horizontal)
cv2.imshow('HoriStack2', hori)
cv2.waitKey(0)

# StereoBM returns fixed-point disparities scaled by 16
disparity = disparityL.astype(np.float32) / 16.0
valid_pixels = disparity > 0

# calculate depth data: Z = fx * baseline / disparity
depth = np.zeros(shape=left.shape).astype("uint8")
depth[valid_pixels] = (fx * baseline) / (units * disparity[valid_pixels])

# visualize depth data
depth = cv2.equalizeHist(depth)
colorized_depth = np.zeros((left.shape[0], left.shape[1], 3), dtype="uint8")
temp = cv2.applyColorMap(depth, cv2.COLORMAP_JET)
colorized_depth[valid_pixels] = temp[valid_pixels]
plt.imshow(cv2.cvtColor(colorized_depth, cv2.COLOR_BGR2RGB))  # applyColorMap returns BGR
plt.show()

I tried several code samples from GitHub, Stack Overflow, and the OpenCV tutorials, but none of them worked well, so I thought the problem was with our camera or our images. I had to downscale the pictures, because they were in BMP file format and I could not upload them to Stack Overflow at full size. 😀

So, these are my left and right raw images.

Left pic, right pic:

[left image] [right image]

And here are my raw disparity, filtered disparity, and calculated depth map:


If I missed any information, let me know. Thanks for the help.

Asked By: Botaa


Answers:

A couple of things are missing. StereoBM is not magic and doesn't do everything for you.

As I already wrote here, you need to have a calibrated system, where all the intrinsic and extrinsic parameters of the stereo rig are known.
Did you calibrate your system? How did you end up with those values of fx and baseline?
Are you using a stereo rig, or are those simply two images taken with the same camera?

Why do we need calibration?

First, look at your images: they are not rectified! In rectified images, corresponding points lie on the same horizontal line. Rectification can be done only if you have a calibrated system.
As you can see from the bottom corner of the book, it is not aligned (it appears at different heights in the left and right images).
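For illustration, this is roughly what rectification looks like in OpenCV. A minimal sketch, not the code from my link: K1, D1, K2, D2, R, T, the image size, and the dummy input images are placeholders that you must replace with your own calibration output and pictures.

import cv2
import numpy as np

# Placeholder calibration results -- replace with the output of your own
# stereo calibration (e.g. cv2.stereoCalibrate). K1/K2 are the 3x3 camera
# matrices, D1/D2 the distortion coefficients, R/T the rotation and
# translation from the left camera to the right camera.
K1, D1 = np.eye(3), np.zeros(5)
K2, D2 = np.eye(3), np.zeros(5)
R, T = np.eye(3), np.array([[58.0], [0.0], [0.0]])  # T in mm -> baseline

image_size = (1280, 720)                       # (width, height), adjust to yours
left = np.zeros(image_size[::-1], np.uint8)    # replace with your left image
right = np.zeros(image_size[::-1], np.uint8)   # replace with your right image

# Rectification transforms; Q is the 4x4 disparity-to-depth mapping matrix.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, D1, K2, D2, image_size, R, T, alpha=0)

# Undistort + rectify maps: this step also removes lens distortion.
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)
left_rect = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)

# Only the rectified pair should be fed to StereoBM / StereoSGBM.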

Second, you are not accounting for lens distortion, which can be quite large on common cameras (the undistort/rectify maps in the sketch above remove it as part of rectification).

Then, to calculate depth you need the baseline information.
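For example, taking the fx = 942.8 px and 58 mm baseline from the question, a point with disparity d = 64 px would lie at Z = fx * baseline / d = 942.8 * 58 / 64 ≈ 854 mm.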

I encourage you to give it a try.
You can find my code to build the depth map here; you may combine it with other examples to create your own system.

Here is how I do calibration instead. Good luck.
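For reference, a minimal stereo-calibration sketch with OpenCV, not the code from my link: the chessboard pattern size (9x6 inner corners), the 25 mm square size, and the calib/ file paths are all assumptions to adjust to your setup.

import cv2
import numpy as np
import glob

pattern = (9, 6)  # inner corners of the chessboard target
square = 25.0     # square size in mm; R, T (and hence depth) then come out in mm

# 3D coordinates of the board corners in the board's own reference frame
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, left_pts, right_pts, size = [], [], [], None
for lf, rf in zip(sorted(glob.glob("calib/left_*.png")),
                  sorted(glob.glob("calib/right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    size = gl.shape[::-1]  # (width, height)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:  # keep only pairs where the board was found in both views
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

# Intrinsics per camera, then the stereo extrinsics (R, T between cameras).
_, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
rms, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, D1, K2, D2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print("RMS reprojection error:", rms)  # aim for well under 1 pixel

The resulting K1, D1, K2, D2, R, T are exactly what the rectification sketch above expects as input.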

Answered By: decadenza

I would like to know how to update the extrinsic parameters, because the two cameras are installed on a robot, and since the robot is mobile, the position of the two cameras in space changes.

Answered By: najib