EasyMocap
Some calibration-tips:
Since calibration is extremely important for a successful capture session, I'd like to contribute some experiences and thoughts. I made a lot of mistakes in the past that can be avoided by following some rules of thumb:
- Sensor type:
  - Ideally use a global-shutter sensor, where the whole picture (every pixel) is captured at the same time. A rolling-shutter sensor can produce "smeared" images (but if the frame rate is high enough, rolling-shutter sensors are equal in quality).
  - Although mobile cameras are easily available, it can be hard to find out the exact camera sensor type/name (for Apple mobile devices sometimes impossible). That's important because without knowledge of the specific sensor size and physical pixel size you can't really get world units back: the OpenCV camera matrix expresses the focal length in pixels (https://answers.opencv.org/question/139166/focal-length-from-calibration-parameters/ is a good read).
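To make the pixel-vs-millimetre point above concrete, here is a minimal sketch of the conversion. The sensor width, image width, and focal length below are made-up example values (roughly a 1/2.3" sensor), not figures for any specific camera — you have to look up your own sensor's datasheet.

```python
# Convert a calibrated focal length from pixels to millimetres.
# f_mm = f_px * (physical pixel pitch) = f_px * sensor_width / image_width
# All numeric values below are illustrative assumptions, not real camera data.

def focal_px_to_mm(f_px, sensor_width_mm, image_width_px):
    """Focal length in mm from a pixel-unit focal length (OpenCV convention)."""
    return f_px * sensor_width_mm / image_width_px

# Hypothetical 1/2.3" sensor (~6.17 mm wide), 4000 px image width,
# calibrated fx = 3100 px:
f_mm = focal_px_to_mm(3100, 6.17, 4000)
print(round(f_mm, 2))  # 4.78
```

Without the sensor width (or equivalently the pixel pitch), the conversion is impossible — which is exactly why the unknown sensor specs of mobile devices are a problem.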
- Lenses / focus:
  - It's also quite important to keep the focal length the same over the whole recording session, otherwise the camera matrix must be adjusted dynamically. If you want to re-use your intrinsics file, your focal length must never change (this can be problematic for mobile cameras, where auto-focus is enabled by default in most apps). For Apple devices, "ProCam8" is the only app I've found where you can set a manual focus numerically (and therefore re-use these settings).
  - Another thing to keep in mind is that practically every mobile device pre-undistorts the recorded raw images. On one hand that is nice, because you can then set the distortion parameters to 0 (and you should do so, otherwise you will undistort an already-undistorted image, which will lead to strange artefacts along the boundaries). On the other hand, without proper knowledge of these non-linear coefficients a re-projection is impossible (correct me here if I'm wrong).
  - So in general I'd recommend either a fixed-focus lens or a manual-focus lens that is locked after calibration (both ideally with <1% distortion). If that's not possible, you should do the undistortion and rectification yourself (the calibration tools here also do this for you).
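To see why zeroed coefficients are the right choice for pre-undistorted footage, here is a numpy-only sketch of the radial part of the usual Brown–Conrady distortion model (the same family OpenCV uses). The point values and coefficients are illustrative, not from a real calibration.

```python
import numpy as np

# Radial part of the Brown-Conrady distortion model, written out by hand
# so the demo stays self-contained. Values are illustrative assumptions.
def distort_radial(pts, k1=0.0, k2=0.0):
    """Apply radial distortion to normalized image points of shape (N, 2)."""
    r2 = np.sum(pts**2, axis=1, keepdims=True)  # squared radius per point
    return pts * (1.0 + k1 * r2 + k2 * r2**2)

pts = np.array([[0.3, -0.2], [0.5, 0.5]])

# With all coefficients at 0 the model is the identity -- consistent with
# footage the device has already undistorted:
assert np.allclose(distort_radial(pts), pts)

# With non-zero coefficients the points move; "correcting" an already
# corrected image shifts pixels a second time (the boundary artefacts above):
moved = distort_radial(pts, k1=-0.1)
print(np.allclose(moved, pts))  # False
```

So if the device has already removed the lens distortion, any non-zero coefficients you feed in afterwards describe a distortion that is no longer in the image.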
- Calibration patterns / pattern size:
  - You could write a whole book about this (although most tutorials I've found never really dig into it). Always calibrate for the distance of your desired capture subject. The board should always be held at a slight angle (never completely co-planar with the camera, but also never angled too much; +/- 30-40 degrees always works for me). The larger the distance, the bigger the pattern image should be. Ideally the pattern should cover the whole image, but this is very often impossible; therefore shoot enough images (about 10-20), especially at the edges of the image if you are facing distortion from your lens, otherwise your distortion coefficients will lead to bad undistortion.
  - Try to get the best chessboard image quality you can: it should be non-reflective, high-contrast, as planar as possible, and of very good print quality.
  - One last thought: there should be either an automatic or a heuristic quality check that evaluates the reprojection error of every image before it is fed into the calibration algorithm. Otherwise even a single outlier (one bad calibration image) can spoil the overall calibration. I usually use MATLAB with its calibration toolbox, which lets you check the RMS error of every single image (you can also use the MATLAB camera matrix with OpenCV, but don't forget to transpose the matrix, since MATLAB matrices are column-major and OpenCV's are row-major).
  - For most of my purposes, a "good" overall RMS error is between 0.1 and 0.3 pixels (depending on the required precision and the hardware quality I can use).
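The per-image quality check described above can be sketched in a few lines. This is a numpy-only illustration with fabricated residuals, not real calibration output; the 1.0 px rejection threshold is an assumption you would tune to your setup. The transpose at the end shows the MATLAB-to-OpenCV camera-matrix conversion mentioned above (example numbers only).

```python
import numpy as np

# Per-image RMS reprojection error, used to drop outlier calibration images
# before the final calibration pass. Residuals are fabricated for illustration.
def per_image_rms(residuals):
    """residuals: list of (N, 2) arrays of per-corner errors in pixels."""
    return np.array([np.sqrt(np.mean(np.sum(r**2, axis=1))) for r in residuals])

rng = np.random.default_rng(0)
residuals = [rng.normal(0, 0.15, size=(54, 2)) for _ in range(10)]
residuals[3] += 2.0          # one bad board pose -- the "single outlier"

rms = per_image_rms(residuals)
keep = rms < 1.0             # threshold in pixels: a judgment call
print(np.where(~keep)[0])    # [3] -- only the bad image is rejected

# MATLAB stores the intrinsic matrix as the transpose of OpenCV's layout:
K_matlab = np.array([[3100., 0., 0.], [0., 3100., 0.], [2000., 1500., 1.]])
K_opencv = K_matlab.T        # now cx, cy sit in the last column
assert K_opencv[0, 2] == 2000 and K_opencv[1, 2] == 1500
```

After dropping the flagged images, you would re-run the calibration on the surviving set and check that the overall RMS lands in the 0.1-0.3 px range mentioned above.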
Thank you @FrankSpalteholz ! I will add these tips to our tutorials!
Amazing points @FrankSpalteholz! I struggled with calibration for months. I got it working for me, but most of the points you talked about I had never noticed.
Thanks a lot... Calibration is by far the most challenging part of using EasyMocap, and I think with this information things will get easier.
Me too, Carlos. Although I'm more familiar with short-distance AR or stereo-pass-through VR applications (where I mostly use a computer screen for calibration, with the pattern image set to fullscreen), this here is quite different. I'm currently setting up a test configuration with 4 action cams, because they are fairly cheap while providing 60 fps at 4K plus auto-undistortion functions (some of them). These cams normally have a fixed-focus lens (infinity) due to their wide-lens configuration. I'm super grateful for this project, so my pleasure. I'd be glad to help, so feel free to contact me.
@chingswy let me ask you something. Your calibration tool uses a standard chessboard pattern. But wouldn't this lead to degenerate configurations when someone uses a setup with the cameras placed in a circle? Cameras facing each other (by this I mean rotated by an increment of 180 degrees) will produce the same rotation vectors. I'm currently working on an ArUco-board solution where this can't happen.
Hello @FrankSpalteholz, our cameras in the LightStage are placed in a circle, and this calibration strategy works well.
This is exactly what I'm wondering about, to be honest. I'm not asking for a paper-length explanation, and correct me if I'm wrong, but the extrinsics actually contain a translation vector (the position of the upper-left corner of the chessboard in camera space) and a rotation vector (the rotation of the chessboard relative to the camera). I saw that you also compute the rotation matrix from this (but without checking whether it is a valid rotation matrix, which must actually have a determinant of 1), but anyway. Without too much effort on your side, could you point me to the code where you use these vectors for all the cameras? I'm still confused about why this works. Thank you very much.
Edit: and yes, your results are stunning! I'd have bet there would be more calculation behind getting the extrinsics, for example taking all camera positions/rotations into account and eliminating those degenerate configurations. But all I found was calib_extri.py.
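The validity check mentioned above is cheap to do. Here is a numpy-only sketch (the Rodrigues conversion is written out by hand so the example is self-contained; the rotation vector is an arbitrary example, not from any EasyMocap output): a proper rotation matrix R must satisfy R Rᵀ = I and det(R) = +1.

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector (axis * angle) -> 3x3 rotation matrix (Rodrigues formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],       # cross-product (skew-symmetric) matrix
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def is_valid_rotation(R, tol=1e-6):
    """True iff R is orthonormal with determinant +1."""
    return (np.allclose(R @ R.T, np.eye(3), atol=tol)
            and abs(np.linalg.det(R) - 1.0) < tol)

R = rodrigues(np.array([0.1, -0.3, 0.2]))   # arbitrary example rotation vector
print(is_valid_rotation(R))                 # True
print(is_valid_rotation(-np.eye(3)))        # False: det = -1, a reflection
```

A matrix that fails this check (e.g. a reflection with determinant -1) indicates a broken solve or a degenerate pose, so it is a useful sanity check to run on every estimated extrinsic.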
Can you recommend some camera types? Or which cameras do you use to get the corrected camera parameters?
I was successful with all cameras that I tested. I think the most important part is the intrinsic calibration.
Pay more attention to the intrinsics: record the chessboard in all the corners of the camera image.
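The advice to cover all corners of the image can be turned into a quick heuristic: bin the detected chessboard corners into a coarse grid over the image and check what fraction of cells were ever hit. This is an illustrative numpy-only sketch; the corner coordinates, the 8x8 grid, and the 0.7 threshold are all assumptions, not EasyMocap's actual check.

```python
import numpy as np

# Heuristic coverage check for a set of detected chessboard corners:
# fraction of grid cells over the image that contain at least one corner.
def coverage_fraction(corners, image_size, grid=(8, 8)):
    """corners: (N, 2) pixel coords; image_size: (width, height)."""
    w, h = image_size
    gx = np.clip((corners[:, 0] / w * grid[0]).astype(int), 0, grid[0] - 1)
    gy = np.clip((corners[:, 1] / h * grid[1]).astype(int), 0, grid[1] - 1)
    hit = np.zeros(grid, dtype=bool)
    hit[gx, gy] = True
    return hit.mean()

rng = np.random.default_rng(1)
# Simulated detections: one set stuck in the image centre, one spread out.
centered = rng.uniform([800, 600], [1200, 900], size=(500, 2))
spread = rng.uniform([0, 0], [2000, 1500], size=(500, 2))

print(coverage_fraction(centered, (2000, 1500)) < 0.7)  # True: poor coverage
print(coverage_fraction(spread, (2000, 1500)) > 0.7)    # True: good coverage
```

A low coverage fraction means the edges of the frame were never sampled, which is exactly where the distortion coefficients are otherwise poorly constrained.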
@FrankSpalteholz Hello, I'm a newbie in this field. If I test with my own videos, do I need to calibrate the camera? Or in which cases do I need camera calibration?
@huynhloc04 yes, you need to calibrate your cameras, both intrinsically and extrinsically.
@carlosedubarreto Thanks for your answer. So is there any way to test with videos that weren't taken with my own camera and don't need camera calibration, such as videos downloaded from YouTube?
@huynhloc04 for EasyMocap you need at least 2 cameras, and because of that you need to calibrate them. I'm saying this for the main method; the GitHub pages describe other methods, but I didn't test those, so I can't help you there. Maybe someone else can.
@carlosedubarreto Oh I see, thank you very much.