
How to combine with parametric model

yingweima2022 opened this issue on Dec 23 '20 · 7 comments

Hello, from issues #19 and #22 I learned that distortion can either be removed using the generic model, or applied to a COLMAP parametric model in combination with BA.

I would also like to know how to combine this with the calibration result from OpenCV's calibration functions, for example, how to apply it to robot hand-eye calibration.

yingweima2022 avatar Dec 23 '20 06:12 yingweima2022

I also encountered this problem. Have you solved it?

StarryPath avatar Jul 16 '21 11:07 StarryPath

@puzzlepaint I'm also puzzled by this question. Given a pixel coordinate, how can I get the undistorted coordinates of that pixel using the generic model, similar to what the OpenCV function "undistortPoints" does?

StarryPath avatar Jul 16 '21 11:07 StarryPath

Based on the description of undistortPoints here, it seems to first unproject the given input pixels and then optionally apply some further steps (with R and P) that are independent of the original camera model.

The corresponding unprojection function in the generic camera model implementation, for example in the model in camera_calibration/applications/camera_calibration/generic_models/src/central_generic.h, would be this one: https://github.com/puzzlepaint/camera_calibration/blob/846bb5282a23b68e9fd7f3d240f7c979a016dea6/applications/camera_calibration/generic_models/src/central_generic.h#L418
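As a rough sketch of how the two relate (assuming a calibrated camera object whose Unproject(pixel, &direction) call fills in the observation direction, as used later in this thread; the helper name and the templated camera type are placeholders, since the exact class and loading code are not shown here):

    #include <Eigen/Core>

    // Sketch of an undistortPoints-like helper on top of the generic model.
    // The camera type is a template parameter because the concrete class and
    // its construction are not shown in this thread.
    template <typename CameraT>
    Eigen::Vector2d UndistortPointGeneric(const CameraT& camera,
                                          const Eigen::Vector2d& pixel) {
      Eigen::Vector3d d;
      camera.Unproject(pixel, &d);  // calibrated observation direction of the pixel
      // Analogous to cv2.undistortPoints with default R and P: normalized
      // image coordinates on the z = 1 plane of the camera frame.
      return d.hnormalized();  // (d.x / d.z, d.y / d.z)
    }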

Regarding the question in the first post of this GitHub issue, I don't understand what exactly is being asked for.

puzzlepaint avatar Jul 16 '21 17:07 puzzlepaint

@puzzlepaint Thank you for your reply. When I multiply the pinhole camera matrix by d (with d computed by OpenCV), I get the undistorted coordinates of the pixel:

    import cv2
    import numpy as np

    # Pinhole intrinsics and distortion coefficients from OpenCV calibration.
    camera_matrix1 = np.array([[3.54146051e+03, 0.00000000e+00, 2.02736891e+03],
                               [0.00000000e+00, 3.54195528e+03, 1.52639175e+03],
                               [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
    dist_coeffs1 = np.array([-0.09569434, 0.10140725, 0.00025152, -0.000199, -0.0185507])

    src1 = np.array([[[1000, 500]]], np.float32)

    # Normalized (undistorted) coordinates on the z = 1 plane.
    dst3 = cv2.undistortPoints(src1, camera_matrix1, dist_coeffs1)
    tmp = np.array([dst3[0][0][0], dst3[0][0][1], 1], np.float32)

    # Re-project with the pinhole matrix to get the undistorted pixel position.
    pixel = np.dot(camera_matrix1, tmp.T)  # result: [985.92114981 485.65509851 1.]

But when I multiply the pinhole camera matrix by d (with d computed by the generic model), I do not get the undistorted coordinates of the pixel:

    // Same pinhole intrinsics as above (Eigen types assumed).
    Eigen::Matrix3d camera_matrix1;
    camera_matrix1 << 3.54146051e+03, 0.00000000e+00, 2.02736891e+03,
                      0.00000000e+00, 3.54195528e+03, 1.52639175e+03,
                      0.00000000e+00, 0.00000000e+00, 1.00000000e+00;

    Eigen::Vector2d p2;
    p2 << 1000, 500;

    // Unproject the pixel to an observation direction with the generic model.
    Eigen::Vector3d d;
    camera.Unproject(p2, &d);

    // Normalize by z and re-project with the pinhole matrix.
    Eigen::Vector3d pixel = camera_matrix1 * d / d[2];  // result: [975.902, 522.557, 1]

How can I solve this problem?

StarryPath avatar Jul 19 '21 03:07 StarryPath

Sorry, but I don't really understand what the problem is. What exactly would you like to achieve?

Do you think that the calculation should return the same value in both cases, given that both camera models have been calibrated for the same camera? If so, that is not necessarily the case. The central-generic model's observation directions can, for example, be arbitrarily rotated while rotating all camera poses by the same amount in the opposite direction in order to cancel out the effect. Doing this does not change the shape of the camera intrinsics; however, it will change the directions returned by camera.Unproject(p2, &d). If you take these directions and project them to a pinhole image that is arbitrarily defined to look towards the z direction, then the results will differ even though the camera remained the same. While the generic camera calibration program does try to apply a canonical orientation to the generic calibrations, this is not necessarily the same orientation as what is obtained with a different camera model.
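To illustrate that last point with a small Eigen-only sketch (the direction value and the rotation are arbitrary placeholders; the pinhole matrix is the one from the snippet above):

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
      // Pinhole matrix from the OpenCV calibration in the snippet above.
      Eigen::Matrix3d K;
      K << 3.54146051e+03, 0.0, 2.02736891e+03,
           0.0, 3.54195528e+03, 1.52639175e+03,
           0.0, 0.0, 1.0;

      // A calibrated observation direction and an arbitrary small rotation of
      // the whole calibration (both are placeholder values).
      Eigen::Vector3d d(-0.012, -0.011, 1.0);
      Eigen::Matrix3d R =
          Eigen::AngleAxisd(0.01, Eigen::Vector3d::UnitY()).toRotationMatrix();
      Eigen::Vector3d d_rot = R * d;

      // Projecting both onto the same z-facing pinhole image gives different
      // pixels, although both calibrations can describe the same camera once
      // the camera poses are rotated accordingly.
      Eigen::Vector3d pix = K * (d / d.z());
      Eigen::Vector3d pix_rot = K * (d_rot / d_rot.z());
      std::cout << pix.head<2>().transpose() << "  vs  "
                << pix_rot.head<2>().transpose() << std::endl;
      return 0;
    }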

puzzlepaint avatar Jul 21 '21 20:07 puzzlepaint

@puzzlepaint Thank you very much. I'm very happy to see your reply, and I've been waiting for it.
I want to use the central-generic model with a stereo camera for a measurement task with high accuracy requirements. After reading your paper, I think the generic model may provide higher accuracy. However, unlike with a parametric model, I cannot determine the camera's focal length and optical center from the calibration, so I cannot complete the measurement. I would like to know how to determine the optical center of a camera calibrated with the central-generic model. Besides, I don't understand this sentence of yours: "The central-generic model's observation directions can for example be arbitrarily rotated while rotating all camera poses by the same amount in the opposite direction in order to cancel out the effect." Is there any relevant reference, or could you please explain it in more detail? Thanks!

StarryPath avatar Jul 23 '21 08:07 StarryPath

The generic camera models do not require or use focal length and optical center parameters. These parameters are only necessary for parametric models that include them, and they are not required to do stereo matching. What do you think you need these parameters for?
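As a minimal, self-contained illustration of why those parameters are not needed for measuring: with a central model, each pixel simply gives an observation ray, and two rays plus the calibrated relative pose are enough to triangulate a 3D point. The values, names, and the midpoint-triangulation helper below are placeholders, not part of this repository; in practice the directions would come from each camera's Unproject() and the relative pose from the joint stereo calibration.

    #include <Eigen/Dense>
    #include <iostream>

    // Midpoint triangulation of two rays x = c0 + s*v0 and x = c1 + t*v1:
    // solve for the ray parameters minimizing the distance between the rays
    // and return the midpoint of the connecting segment.
    Eigen::Vector3d TriangulateMidpoint(const Eigen::Vector3d& c0,
                                        const Eigen::Vector3d& v0,
                                        const Eigen::Vector3d& c1,
                                        const Eigen::Vector3d& v1) {
      Eigen::Matrix2d A;
      A << v0.dot(v0), -v0.dot(v1),
           v0.dot(v1), -v1.dot(v1);
      Eigen::Vector2d b(v0.dot(c1 - c0), v1.dot(c1 - c0));
      Eigen::Vector2d st = A.colPivHouseholderQr().solve(b);
      return 0.5 * ((c0 + st(0) * v0) + (c1 + st(1) * v1));
    }

    int main() {
      // Observation directions in each camera's own frame (placeholders; in
      // practice they would come from each generic model's Unproject()).
      Eigen::Vector3d d0(0.1, 0.0, 1.0);
      Eigen::Vector3d d1(-0.05, 0.0, 1.0);

      // Calibrated stereo extrinsics: camera 1 expressed in camera 0's frame
      // (placeholder values, e.g. a 12 cm baseline along x).
      Eigen::Matrix3d R_01 = Eigen::Matrix3d::Identity();
      Eigen::Vector3d t_01(0.12, 0.0, 0.0);

      // Express both rays in camera 0's frame and triangulate; no focal
      // length or principal point is involved anywhere.
      Eigen::Vector3d point = TriangulateMidpoint(
          Eigen::Vector3d::Zero(), d0.normalized(),
          t_01, (R_01 * d1).normalized());
      std::cout << "measured 3D point: " << point.transpose() << std::endl;
      return 0;
    }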

Trying to elaborate on the statement about the rotation ambiguity:

  • Consider that you have a camera, calibrated with the central-generic model, that observes some 3D feature points. Taking a 3D feature point and projecting it to the camera image using the calibrated model gives a corresponding 2D observation point (pixel location) in the image.
  • Starting from this state, you can modify the camera calibration by rotating all of the generic models' observation directions around the camera origin by some amount. You further modify the camera pose by applying the exact opposite rotation to it. This gives you a modified state.
  • In this modified state, taking any 3D feature position and projecting it to the camera image using the modified calibration still returns the same 2D observation point as before, since the two rotations cancel each other out (a small numerical sketch of this cancellation follows below). The calibration is thus equivalent to the original calibration, since with the modified camera pose, the modified observation directions still point in the same (absolute) directions. However, the direction values in the calibration were changed (and this would give you a different undistorted image, unless you deterministically compute a unique orientation for the undistorted image).
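A small Eigen-only numerical check of this cancellation, independent of the calibration code itself and using placeholder values:

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
      // Original state: a world point, a world-to-camera pose (C, t), and the
      // calibrated observation direction of the pixel observing the point
      // (all values are placeholders).
      Eigen::Vector3d p_world(0.3, -0.2, 2.0);
      Eigen::Matrix3d C =
          Eigen::AngleAxisd(0.1, Eigen::Vector3d::UnitY()).toRotationMatrix();
      Eigen::Vector3d t(0.05, 0.0, 0.1);
      Eigen::Vector3d p_cam = C * p_world + t;   // point in camera coordinates
      Eigen::Vector3d d = p_cam.normalized();    // matching observation direction

      // Modified state: rotate all observation directions by R and apply the
      // opposite rotation to the camera pose, so that the world-to-camera
      // transformation becomes (R * C, R * t).
      Eigen::Matrix3d R =
          Eigen::AngleAxisd(0.7, Eigen::Vector3d(1, 2, 3).normalized()).toRotationMatrix();
      Eigen::Vector3d d_mod = R * d;
      Eigen::Vector3d p_cam_mod = (R * C) * p_world + R * t;

      // The rotated direction still matches the point in the modified camera
      // frame, so projection selects the same pixel in both states.
      std::cout << "mismatch: " << (d_mod - p_cam_mod.normalized()).norm()
                << "  (should be ~0)" << std::endl;
      return 0;
    }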

puzzlepaint avatar Jul 23 '21 16:07 puzzlepaint