Need help in understanding error for cv2.undistortPoints()

Question:

I am trying to triangulate points from a projector and a camera using Structured Light in OpenCV Python. As part of this process I have a list of tuples that match one-to-one between the camera and the projector. I am passing the camera points to cv2.undistortPoints() as below:

camera_normalizedPoints = cv2.undistortPoints(camera_points, camera_K, camera_d)

However, Python throws the following error, and I am unable to understand what it means.

camera_normalizedPoints = cv2.undistortPoints(camera_points, camera_K, camera_d)
cv2.error: /home/base/opencv_build/opencv/modules/imgproc/src/undistort.cpp:312: error: (-215) CV_IS_MAT(_src) && CV_IS_MAT(_dst) && (_src->rows == 1 || _src->cols == 1) && (_dst->rows == 1 || _dst->cols == 1) && _src->cols + _src->rows - 1 == _dst->rows + _dst->cols - 1 && (CV_MAT_TYPE(_src->type) == CV_32FC2 || CV_MAT_TYPE(_src->type) == CV_64FC2) && (CV_MAT_TYPE(_dst->type) == CV_32FC2 || CV_MAT_TYPE(_dst->type) == CV_64FC2) in function cvUndistortPoints

Any help is greatly appreciated.

Thanks.

Asked By: Shubs


Answers:

Unfortunately, the documentation is not always explicit about the expected input shapes in Python, and undistortPoints() doesn’t even have Python-specific documentation yet.

The input points need to be an array with the shape (n_points, 1, n_dimensions). So if you have 2D coordinates, they should be in the shape (n_points, 1, 2); for 3D coordinates they should be in the shape (n_points, 1, 3). AFAIK, this format will work for all OpenCV functions that take point sets, while only a few OpenCV functions will also accept points in the shape (n_points, n_dimensions). I find it best to just keep everything consistent and in the format (n_points, 1, n_dimensions).

To be clear, this means an array of four 32-bit float 2D points would look like:

points = np.array([[[x1, y1]], [[x2, y2]], [[x3, y3]], [[x4, y4]]], dtype=np.float32)
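Applied to the question: if camera_points starts out as a plain Python list of (x, y) tuples, it can be converted into the required shape before the cv2.undistortPoints() call. A quick sketch (the coordinate values here are just placeholders):

```python
import numpy as np

# A list of matched (x, y) tuples, as in the question (placeholder values).
camera_points = [(100.0, 200.0), (150.0, 250.0), (300.0, 400.0)]

# Convert to a float32 array of shape (n_points, 1, 2), the layout
# cv2.undistortPoints() expects for 2D points.
camera_points = np.asarray(camera_points, dtype=np.float32).reshape(-1, 1, 2)

print(camera_points.shape)  # (3, 1, 2)
print(camera_points.dtype)  # float32
```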

If you have an array that has the shape (n_points, n_dimensions) you can expand it with np.newaxis:

>>> points = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
>>> points.shape
(4, 2)
>>> points = points[:, np.newaxis, :]
>>> points.shape
(4, 1, 2)

or with np.expand_dims():

>>> points = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
>>> points.shape
(4, 2)
>>> points = np.expand_dims(points, 1)
>>> points.shape
(4, 1, 2)

or with various orderings of np.transpose(), depending on the current order of your dimensions. For example, if your shape is (1, n_points, n_dimensions), you want to swap axis 0 with axis 1 to get (n_points, 1, n_dimensions): points = np.transpose(points, (1, 0, 2)) puts axis 1 first, then axis 0, then axis 2, giving the correct shape.
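As a concrete sketch of that last case:

```python
import numpy as np

# Points stored as (1, n_points, n_dimensions) -- one "row" of four 2D points.
points = np.array([[[1, 2], [3, 4], [5, 6], [7, 8]]])
print(points.shape)  # (1, 4, 2)

# Swap axes 0 and 1 to get (n_points, 1, n_dimensions).
points = np.transpose(points, (1, 0, 2))
print(points.shape)  # (4, 1, 2)
```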


This may seem like an odd format if you only consider a list of points, but it is reasonable if you think of the points as coordinates in an image. In an image, the coordinate of each point is defined by an (x, y) pair, like:

(0, 0)    (1, 0)    (2, 0)    ...
(0, 1)    (1, 1)    (2, 1)    ...
(0, 2)    (1, 2)    (2, 2)    ...
...

Here it makes sense to put each coordinate into a separate channel of a two-channel array, so that you get one 2D array of x-coordinates, and one 2D array of y-coordinates, like:

Channel 0 (x-coordinates):

0    1    2    ...
0    1    2    ...
0    1    2    ...
...

Channel 1 (y-coordinates):

0    0    0    ...
1    1    1    ...
2    2    2    ...
...

So that’s the reason for having each coordinate on a separate channel.
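A minimal sketch of this view of coordinates as channels, built with np.meshgrid (the image size is made up):

```python
import numpy as np

h, w = 3, 4  # a tiny 3x4 "image"

# With default 'xy' indexing, x varies along columns and y along rows,
# matching the two grids shown above.
xs, ys = np.meshgrid(np.arange(w), np.arange(h))

# Stack into a two-channel array: channel 0 = x, channel 1 = y.
coords = np.dstack([xs, ys])

print(coords.shape)  # (3, 4, 2)
print(coords[1, 2])  # [2 1] -> pixel at row 1, col 2 has (x, y) = (2, 1)
```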


Some other OpenCV functions which require this format include cv2.transform() and cv2.perspectiveTransform(), which I’ve answered identical questions about before, here and here respectively.

Answered By: alkasm

I also ran into this problem, and it took me some time to research and finally understand it.

In OpenCV's camera model, distortion is applied in normalized coordinates, before the camera matrix, so the processing order is:
image_distorted -> inverse camera_matrix -> undistort function -> camera_matrix -> image_undistorted.

So you need a small fix: pass camera_K again as the new camera matrix P (with the identity as R), so the output comes back in pixel coordinates instead of normalized coordinates:

camera_normalizedPoints = cv2.undistortPoints(camera_points, camera_K, camera_d, R=np.eye(3), P=camera_K)

Formula: https://i.stack.imgur.com/nmR5P.jpg
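In other words, without P the function returns normalized coordinates; passing P=camera_K maps them back through the pinhole model, x_pixel = fx * x_norm + cx and y_pixel = fy * y_norm + cy. A small sketch of that last step with a made-up camera matrix (fx, fy, cx, cy are placeholder values; no distortion is involved here):

```python
import numpy as np

# Hypothetical camera matrix (placeholder intrinsics).
camera_K = np.array([[800.0,   0.0, 320.0],
                     [  0.0, 800.0, 240.0],
                     [  0.0,   0.0,   1.0]])

# A normalized point, as cv2.undistortPoints would return without P.
x_norm, y_norm = 0.1, -0.05

# Applying camera_K again converts it back to pixel coordinates.
u = camera_K[0, 0] * x_norm + camera_K[0, 2]  # fx * x + cx
v = camera_K[1, 1] * y_norm + camera_K[1, 2]  # fy * y + cy

print(u, v)  # 400.0 200.0
```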

Answered By: B.Blue