depthai-core
Can't rectify wide-angle RGB image with mesh data generated by calibration tool.
I'm currently working on a project that requires designing a custom wide-angle stereo camera using 2 RGB camera sensors and 1 FFC board.
With this setup, I followed the instructions at https://docs.luxonis.com/en/latest/pages/calibration/ and calibrated the stereo pair at 800P successfully.
I also modified the mesh data files (left.calib, right.calib) generated by the calibration tool, since the camera resolution I'm using at runtime is 400P. The modified mesh data size is 1066 * 2 = (400 / 16 + 1) * (640 / 16 + 1) * 2.
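For reference, a quick sanity-check sketch of that arithmetic (the mesh file name matches the snippets below and is illustrative):

#include <cstddef>
#include <fstream>
#include <iostream>

int main() {
  // A 400P (640x400) mesh with a 16-pixel step should hold
  // (400 / 16 + 1) * (640 / 16 + 1) = 26 * 41 = 1066 (y, x) pairs,
  // i.e. 1066 * 2 * sizeof(float) = 8528 bytes per .calib file.
  constexpr int width = 640, height = 400, step = 16;
  constexpr auto expected_bytes = static_cast<std::size_t>(width / step + 1) *
                                  (height / step + 1) * 2 * sizeof(float);
  std::ifstream fin("left_mesh.calib", std::ios::binary | std::ios::ate);
  std::cout << "expected " << expected_bytes << " bytes, got " << fin.tellg()
            << " bytes\n";
}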
With these 2 modified mesh files, I can generate depth data correctly with the snippet below:
static std::shared_ptr<dai::node::StereoDepth> createStereoDepthNode(
    dai::Pipeline pipeline) {
  auto depth = pipeline.create<dai::node::StereoDepth>();
  depth->setInputResolution(400, 640);
  depth->loadMeshFiles("left_mesh.calib", "right_mesh.calib");
  return depth;
}
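For context, a minimal sketch of how such a node could be wired so the depth and rectified streams can be inspected on the host (pipeline is the dai::Pipeline being built, the stream names are illustrative, and linking the camera outputs into depth->left / depth->right is omitted here):

auto depth = createStereoDepthNode(pipeline);
// Left/right camera outputs are assumed to be linked into depth->left and
// depth->right elsewhere in the pipeline setup.
auto xoutRectified = pipeline.create<dai::node::XLinkOut>();
xoutRectified->setStreamName("rectified_left");  // illustrative name
depth->rectifiedLeft.link(xoutRectified->input);
auto xoutDepth = pipeline.create<dai::node::XLinkOut>();
xoutDepth->setStreamName("depth");  // illustrative name
depth->depth.link(xoutDepth->input);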
And this proves the mesh data works with the StereoDepth node.
Since my left and right cameras are actually RGB cameras, I expect I should be able to rectify the color images as well, using the same mesh data files with an ImageManip node.
The ImageManip node does provide an API, setWarpMesh(mesh_data, mesh_width, mesh_height). However, the result I got was completely wrong, with really bad artifacts.
The following is my implementation snippet for RGB image rectification and the result I got:
static std::shared_ptr<dai::node::ImageManip> createRgbRectificationNode(
    dai::Pipeline pipeline, dai::CameraBoardSocket socket, int image_width,
    int image_height) {
  auto rectifyRgb = pipeline.create<dai::node::ImageManip>();
  std::string mesh_file;
  if (socket == dai::CameraBoardSocket::LEFT) {
    mesh_file = "left_mesh.calib";
  } else if (socket == dai::CameraBoardSocket::RIGHT) {
    mesh_file = "right_mesh.calib";
  }
  std::vector<std::pair<float, float>> mesh_data;
  std::ifstream fin(mesh_file, std::ios::binary);
  std::pair<float, float> mesh_point;
  while (fin.read(reinterpret_cast<char*>(&mesh_point),
                  sizeof(std::pair<float, float>))) {
    // The calibration tool already wrote the mesh data in reversed order
    // (y, x), whereas setWarpMesh expects the data in the normal order (x, y).
    mesh_data.push_back(std::make_pair(mesh_point.second, mesh_point.first));
  }
  const int step_size = 16;
  rectifyRgb->initialConfig.setFrameType(dai::RawImgFrame::Type::YUV420p);
  rectifyRgb->setMaxOutputFrameSize(image_height * image_width * 3);
  // 400 / 16 + 1 = 26, 640 / 16 + 1 = 41, 26 * 41 = 1066
  rectifyRgb->setWarpMesh(mesh_data, image_width / step_size + 1,
                          image_height / step_size + 1);
  return rectifyRgb;
}
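For context, a minimal sketch of how this node could be hooked up (the camera setup and stream name are illustrative, and the ColorCamera is assumed to be configured elsewhere to output frames at the 640x400 runtime resolution):

auto camLeft = pipeline.create<dai::node::ColorCamera>();
camLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
// (400, 640) follows the same argument ordering as setInputResolution(400, 640)
// in createStereoDepthNode above.
auto rectifyLeft = createRgbRectificationNode(
    pipeline, dai::CameraBoardSocket::LEFT, 400, 640);
camLeft->isp.link(rectifyLeft->inputImage);
auto xoutRectLeft = pipeline.create<dai::node::XLinkOut>();
xoutRectLeft->setStreamName("rectified_rgb_left");  // illustrative name
rectifyLeft->out.link(xoutRectLeft->input);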
Left: rectified RGB image with artifacts.
Right: rectified image from stereo.rectifiedLeft/Right port.
Could anyone help me understand why these image artifacts are generated and how to fix this issue? Thanks!
Hi @developer-mayuan, have you seen this example? Would it be helpful in your case? Thanks, Erik
To second this, I also experienced the same banding issue, vertically. It is really strange; it only happens for color camera input. If I convert it to mono using ImageManip first, it works fine.
Another workaround is to increase step_size, say from 16 to 20 or 40. @developer-mayuan, could you try whether increasing it helps for now?
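A minimal sketch of the mono-conversion workaround (colorCam and warpManip are illustrative handles for the color camera and the mesh-warping ImageManip node):

// Convert the color frames to GRAY8 before feeding them to the warp.
auto toMono = pipeline.create<dai::node::ImageManip>();
toMono->initialConfig.setFrameType(dai::RawImgFrame::Type::GRAY8);
colorCam->isp.link(toMono->inputImage);
toMono->out.link(warpManip->inputImage);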
@chengguizi Thanks for following up on this thread! Yes, I also found that the workaround of creating a down-sampled mesh with step_size set to 40 works. I think there is some hardware limitation in the ImageManip node that prevents it from handling mesh data beyond a certain density.
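For anyone else hitting this, a minimal sketch of how a coarser mesh could be regenerated with OpenCV, assuming the rectification maps can be rebuilt from the calibration data (K, D, R, P stand for the intrinsics, distortion coefficients, rectification rotation, and new projection matrix; the (y, x) write order matches what the calibration tool produces):

#include <algorithm>
#include <fstream>
#include <string>
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

// Regenerate a mesh file with a larger step (e.g. 40) by sampling the
// full-resolution rectification maps on a coarser grid.
void writeCoarseMesh(const cv::Mat& K, const cv::Mat& D, const cv::Mat& R,
                     const cv::Mat& P, int width, int height, int step,
                     const std::string& out_path) {
  cv::Mat map_x, map_y;
  cv::initUndistortRectifyMap(K, D, R, P, cv::Size(width, height), CV_32FC1,
                              map_x, map_y);
  std::ofstream fout(out_path, std::ios::binary);
  for (int y = 0; y <= height; y += step) {
    const int yy = std::min(y, height - 1);
    for (int x = 0; x <= width; x += step) {
      const int xx = std::min(x, width - 1);
      // Written in (y, x) order, matching the calibration tool's output.
      const float my = map_y.at<float>(yy, xx);
      const float mx = map_x.at<float>(yy, xx);
      fout.write(reinterpret_cast<const char*>(&my), sizeof(float));
      fout.write(reinterpret_cast<const char*>(&mx), sizeof(float));
    }
  }
}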
@developer-mayuan Good to hear that trick worked!
@Erol444 Could you help feed this problem back to the Luxonis team, regarding this particular mesh step size and the banding artifacts?
@Erol444 Could you help follow up on this bug? It should be reproducible on the Luxonis side.
@szabi-luxonis could you check this out?
This issue was encountered on the camera_node branch.
We assume this is a limitation of the caching mechanism that caches the data upfront in CMX so that the warp can run faster.
Bumping the step size from 16 to 64 solved the issue. Applicable to the ImageManip node.
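To put rough numbers on that (assuming 2 floats, i.e. 8 bytes, per mesh point at 640x400): a step of 16 gives (640 / 16 + 1) * (400 / 16 + 1) = 41 * 26 = 1066 points, roughly 8.5 KB per mesh, while a step of 64 gives (640 / 64 + 1) * (400 / 64 + 1) = 11 * 7 = 77 points, well under 1 KB, which is presumably why the coarser mesh fits within whatever CMX budget the warp caching has.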
@szabi-luxonis Thanks for checking! Yes, increasing the step size always seems to be the workaround. But this is not elegant in two ways:
- There is no warning about when exactly this issue will occur, or at which step size. It seems to largely depend on the exact pipeline configuration, resolution, etc.
- Increasing the step size will decrease the resolution / precision of the mesh, from my understanding. This will make undistortion less precise at the edges, where more warping is present.
Does the same issue apply to the Warp node?
@chengguizi No, the Warp node should not suffer from this limitation (but it does have other limitations).
@themarpe @szabi-luxonis OK! I will give the Warp node a try then, and report back.
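For reference, a minimal sketch of what the Warp-node variant might look like (the exact setWarpMesh overloads and available options depend on the depthai-core version, so check the Warp node header in your tree; mesh_data, mesh_width, mesh_height, and colorCam are the same illustrative handles as above):

// Rectify the color stream with the dedicated Warp node instead of ImageManip.
auto warp = pipeline.create<dai::node::Warp>();
warp->setWarpMesh(mesh_data, mesh_width, mesh_height);
warp->setOutputSize(640, 400);
colorCam->isp.link(warp->inputImage);
auto xoutWarp = pipeline.create<dai::node::XLinkOut>();
xoutWarp->setStreamName("warped_rgb");  // illustrative name
warp->out.link(xoutWarp->input);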
So, can anything be done to rescue ImageManip setWarpMesh? The limitation makes it quite fuzzy to use :x I never know what a good step size is.
@chengguizi It's a tradeoff at the moment, with some additional benefits and performance that this limitation brings.
We might look into it more down the road, though for now this will likely remain.