linemod_dataset
How to generate ground truth files?
The creator of this dataset used a Kinect camera to capture the photos and depth information, which I understand. But I am wondering how he generated/obtained the following:
- How did he generate the rotation and translation matrices for each image and each object?
- How did he generate the mesh.ply file for each object? Which method might he have used?
- In the Linemod_preprocessed dataset he has provided: a) a gt.yml file containing the R and T matrices and additionally the bounding-box coordinates. I am again wondering how he generated this information, which is used in the DenseFusion algorithm; b) a Mask folder containing a mask.png for the particular object in each single image. Again, if he took the images with a Kinect camera, how did he generate a mask for each image?
Do you know the answer? I'm also wondering how he generated the rotation and translation matrices for each image and each object.
- bundle-adjust the scene objects and then use the markers and/or the Kinect for camera tracking
- likely some variant of KinectFusion
- you can derive these from the mesh.ply; see e.g. tobrachmann.py, which generates the mask.png files
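To illustrate the last point: once you have the object mesh and a ground-truth pose (R, t), projecting the mesh vertices through the camera intrinsics gives you both a mask and a 2D bounding box. Below is a minimal sketch, assuming hypothetical Kinect-like intrinsics and a toy point cloud standing in for the real mesh.ply; a full pipeline would rasterize the mesh triangles with a renderer rather than splatting vertices, but the geometry is the same:

```python
import numpy as np

# Hypothetical Kinect-like intrinsics (fx, fy, cx, cy); the actual values
# ship with the dataset's camera info files.
K = np.array([[572.4,   0.0, 325.3],
              [  0.0, 573.6, 242.0],
              [  0.0,   0.0,   1.0]])

def project_object(points, R, t, K, img_hw=(480, 640)):
    """Project 3D vertices with pose (R, t) and intrinsics K; return a
    sparse binary mask and a 2D bounding box (x, y, w, h)."""
    cam = points @ R.T + t            # transform vertices into camera frame
    uvw = cam @ K.T                   # apply intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]     # perspective divide -> pixel coordinates
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, img_hw[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, img_hw[0] - 1)
    mask = np.zeros(img_hw, dtype=np.uint8)
    mask[v, u] = 255                  # vertex splat; a renderer would fill triangles
    bbox = (int(u.min()), int(v.min()),
            int(u.max() - u.min()), int(v.max() - v.min()))
    return mask, bbox

# Toy "mesh": random points on a 10 cm cube placed 1 m in front of the camera.
rng = np.random.default_rng(0)
pts = rng.uniform(-0.05, 0.05, size=(500, 3))
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
mask, bbox = project_object(pts, R, t, K)
```

With the real mesh and the per-frame poses from gt.yml, the same projection yields exactly the per-image mask.png and bounding-box entries the question asks about.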