
How to use Meshroom to calibrate a virtual production rig

Open MiloMindbender opened this issue 1 year ago • 0 comments

For virtual production purposes I have a camera rig consisting of a tracking camera and a 4K production video camera mounted rigidly together. Both cameras output video to a PC. What I would like to do is use Meshroom to find the intrinsics of both cameras and to determine the offset (position and rotation) from the nodal point of one camera to the other.
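For context on what "offset between the cameras" means computationally: once a photogrammetry solve gives each camera a pose (a rotation and a center in the shared world frame), the rigid offset is just the relative transform between the two poses. A minimal numpy sketch, assuming the world-to-camera convention (x_cam = R · (X − c)) that AliceVision's `cameras.sfm` appears to use for its per-pose rotation and center — worth verifying against your own export:

```python
import numpy as np

def relative_pose(R_a, c_a, R_b, c_b):
    """Pose of camera B expressed in camera A's frame.

    R_a, R_b: 3x3 world-to-camera rotation matrices.
    c_a, c_b: camera centers in world coordinates (length-3).
    Returns (R_rel, t_rel): rotation of B relative to A, and
    B's center in A's camera frame (the positional offset).
    """
    R_rel = R_b @ R_a.T           # maps A-frame directions to B-frame
    t_rel = R_a @ (np.asarray(c_b) - np.asarray(c_a))
    return R_rel, t_rel

def rotation_angle_deg(R):
    """Total rotation angle of a 3x3 rotation matrix, in degrees."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))
```

Averaging `R_rel` and `t_rel` over many image pairs (and checking their spread) would also give a sense of whether the millimeter/sub-degree accuracy asked about below is being reached.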

If you can help me to understand how to do this, I will produce a video tutorial on the process.

A typical camera rig looks like this: the box at the top of the frame, attached to the camera handle, is the tracking camera, which has a 1080p video camera in its center. [Camera Rig image] Initially, we don't know the position of the nodal point/lens entrance pupil for either camera, so the offset can't be measured manually. Rotation offsets of a degree or so between the two cameras are also common, and we need to know this rotation offset as well.

Here is a sample of 5 image pairs from the rig; I actually have more, but this is all that would fit within GitHub's 24 MB upload limit: Sony Rig.zip

Running images from this rig through Meshroom seems to give me good data on the positions of both cameras, but I'm not sure whether I'm doing it right to get accurate intrinsics and offsets between them. I've read the documentation on rigs but am not sure if I need to add more steps to get accurate results. My questions are:

  • How can I calibrate so the coordinates will be in cm? (There will be a reference object in the images.)
  • Do I need to add any information to make this more accurate (sensor size, resolution, focal length, etc.), and how/where do I add it?
  • I read the information on the "rig calibration" node, but I'm still not sure how to use it or whether I need it.
  • Is it reasonable to expect offsets accurate to 1 mm and rotational offsets accurate to less than a degree with enough input images?
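On the first question (getting coordinates in cm): a photogrammetry reconstruction is only defined up to scale, so one common approach is to identify two points on the reference object whose real separation is known, measure their distance in the reconstruction, and rescale everything by the ratio. A minimal sketch of that idea, with made-up numbers for illustration (Meshroom's SfMTransform node may be able to apply such a rescale inside the pipeline, but that should be confirmed against the current docs):

```python
import numpy as np

def metric_scale(p1, p2, known_cm):
    """Scale factor converting reconstruction units to cm.

    p1, p2: reconstructed 3D positions of two reference points.
    known_cm: their real-world separation in centimeters.
    """
    return known_cm / np.linalg.norm(np.asarray(p2, float) - np.asarray(p1, float))

# Example: reference points come out 0.5 scene units apart,
# and the real object is 30 cm long.
s = metric_scale([0.0, 0.0, 0.0], [0.5, 0.0, 0.0], 30.0)  # -> 60.0

# Multiply any reconstructed offset vector by s to express it in cm.
offset_cm = s * np.array([0.01, 0.02, 0.0])
```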

Right now, most virtual production people use a series of difficult manual steps to get the offsets and intrinsics for their cameras. If I understand it correctly, Meshroom should be able to recover all of this information just by processing a set of images, without any manual measuring. This would be a huge help to virtual production studios, and it would get you many more users if you can help me figure out how to do it.

MiloMindbender · Aug 06 '22 00:08