
Questions about calibrating large FOV fisheye camera

Open · painterdrown opened this issue 3 years ago • 1 comment

Hi, respect for your work!

I noticed that you tested your method on a fisheye camera that you labeled as Tango in the paper, and it produced a much better result than the traditional OpenCV parametric model. I've tested your method on my fisheye camera too, but I couldn't obtain the expected result. I guess it's due to the way I captured the pattern images. Could you provide more details on how to obtain calibration data when using a large-FOV camera?

  • How many images (or detected features) are enough? I took 100 images, but in almost half of them the pattern failed to be detected.
  • Due to the high distortion at the edges of the image, it's difficult to obtain images that have pattern features at the edge. Does that matter?
  • How can I check that the detected features are good before calibrating?

Would you please share your Tango test data? It would help a lot.

painterdrown • Jun 23 '21 07:06

First, the calibration system was not designed for very strong fisheye lenses. The two issues I am aware of are:

  • The feature detector will at some point fail to detect features if the image distortion is too strong.
  • If significant parts of the image always remain black, they must be excluded so that the point projection algorithm works (as discussed in #15); a quick way to check for such regions is sketched below.
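
As a rough check for this (a generic OpenCV/NumPy sketch, not part of the calibration tool; the `calib_images/*.png` pattern and the intensity threshold are placeholders you would adapt to your data), you can look for pixels that never receive any light across your calibration images:

```python
# Sketch: find image regions that stay (nearly) black across all calibration
# images, e.g. the area outside a fisheye lens circle.
import glob
import cv2
import numpy as np

paths = sorted(glob.glob("calib_images/*.png"))  # hypothetical image location
max_intensity = None
for path in paths:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Keep the per-pixel maximum intensity seen over all images.
    max_intensity = gray if max_intensity is None else np.maximum(max_intensity, gray)

# Pixels that never exceed a small intensity are treated as "always black".
always_black = max_intensity < 10  # threshold is a placeholder
print("Fraction of always-black pixels:", always_black.mean())
cv2.imwrite("always_black_mask.png", (always_black * 255).astype(np.uint8))
```

If a large, fixed portion of the image comes out as always black (typically the corners outside the fisheye circle), that is exactly the situation discussed in #15.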

Regarding your questions:

  • It is not primarily about the number of images (aside from reaching the theoretical minimum, which I don't remember right now, but I guess is about 3). What matters for getting good results is that the coverage of the image area with feature detections is good relative to the grid size you chose (when using the generic camera models).
  • Yes, it is important to cover the whole image area that shall be calibrated with feature detections. The program will automatically constrain the calibrated image area to the axis-aligned bounding box of all feature detections. However, if the actual shape of the area where feature detections exist differs from that bounding box (for example, a circular area in the case of a strong fisheye camera), then the area the program attempts to calibrate will include parts where no feature detections exist. Because there is no regularization term on the camera calibration values, and the point projection algorithm depends on a good calibration in all parts of the calibrated image area, this will cause issues (see #15, and the coverage sketch after this list).
  • I think that using the --show_visualizations flag should display images that let you see where features were detected.
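
To make the coverage point concrete, here is a minimal NumPy sketch. The `features.csv` input is a hypothetical export of detected feature pixel coordinates (one "x,y" pair per line) that you would have to write out yourself; the tool does not produce this file in this form, and the grid resolution below is only a rough stand-in for the calibration grid size:

```python
# Sketch: compare the axis-aligned bounding box of all feature detections
# (which is what will be calibrated) with the area actually covered by
# detections, binned at a coarse grid resolution.
import numpy as np

points = np.loadtxt("features.csv", delimiter=",")  # hypothetical export, shape (N, 2)

x_min, y_min = points.min(axis=0)
x_max, y_max = points.max(axis=0)
print(f"Calibrated bounding box: x [{x_min:.0f}, {x_max:.0f}], y [{y_min:.0f}, {y_max:.0f}]")

# Bin detections into a coarse grid over the bounding box and count cells
# containing at least one detection. Empty cells inside the box are regions
# the calibration must extrapolate into (cf. the circular fisheye footprint).
grid = 20  # cells per axis; adjust to roughly match your chosen grid size
xi = np.clip(((points[:, 0] - x_min) / (x_max - x_min + 1e-9) * grid).astype(int), 0, grid - 1)
yi = np.clip(((points[:, 1] - y_min) / (y_max - y_min + 1e-9) * grid).astype(int), 0, grid - 1)
occupied = np.zeros((grid, grid), dtype=bool)
occupied[yi, xi] = True
print(f"Grid cells with detections: {occupied.sum()} / {grid * grid} "
      f"({100.0 * occupied.mean():.1f}% of the bounding box)")
```

A low percentage here means a large part of the bounding box has no detections backing it, which is the problematic situation described above.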

The Tango data from the paper seemed to have small enough fisheye distortion that it still worked. I uploaded the Tango images I used for calibration here. I guess the calibration pattern in these images was likely the same as for the sample data in #16, but I am not sure anymore.

Alternatively, you may try to use the pre-extracted features in the dataset bin file here (but I am not sure if the dataset bin file format has changed since this one was generated).

puzzlepaint • Jun 23 '21 09:06