Yuan Liu
Hi, I manually rule out symmetric objects, which may cause ambiguity during training. The symmetric object names are listed in https://github.com/liuyuan-pal/Gen6D/blob/main/assets/gso_sym.txt
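For illustration, a minimal sketch of how such a symmetry list could be used to filter a training split. This is not code from the repo: the helper names and the `all_object_names` argument are made up; it only assumes a local copy of `assets/gso_sym.txt` with one object name per line.

```python
# Sketch: exclude symmetric GSO objects from a training split.
# Assumes assets/gso_sym.txt contains one object name per line.
from pathlib import Path

def load_symmetric_names(sym_file="assets/gso_sym.txt"):
    # Read the symmetry list into a set for fast membership tests.
    return {line.strip() for line in Path(sym_file).read_text().splitlines() if line.strip()}

def filter_symmetric(all_object_names, sym_file="assets/gso_sym.txt"):
    # Keep only objects that are NOT marked as symmetric.
    sym_names = load_symmetric_names(sym_file)
    return [name for name in all_object_names if name not in sym_names]

# Example usage (object names here are placeholders):
# train_objects = filter_symmetric(["SCHOOL_BUS", "TOY_CAR", "BALL_A"])
```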
Hi, I directly use the renderings from IBRNet: https://drive.google.com/drive/folders/1qfcPffMy8-rmZjbapLAtdrKwg3AV-NJe?usp=sharing I'm sorry, but I also don't know which model these renderings correspond to.
Hi, 1. I think Gen6D is able to work on the given reference object. The textures of the reference images are used in the camera pose tracking of COLMAP (to...
Hi, I have processed the meta information for you at https://drive.google.com/file/d/101IFEjrk_c7xHoCS08vU0Sexfr2V2XYL/view?usp=sharing The initialization is OK and looks like this: However, when you flip the object or occlude it, the...
BTW, for this `lego` object, the input video does not need to be transposed, so you do NOT need the `--transpose` flag when using `predict.py`.
Hi, 1. The video I captured is not oriented correctly if I read it directly with python-opencv. The `transpose` option just ensures the image is oriented normally (not...
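To illustrate the orientation issue above, here is a minimal sketch (not code from the repo) that reads a video with python-opencv and rotates each frame when needed. The rotation direction is an assumption; whether you need `ROTATE_90_CLOCKWISE` or `ROTATE_90_COUNTERCLOCKWISE` depends on how your phone recorded the video, so check one frame first.

```python
# Sketch: read a video with OpenCV and optionally rotate frames
# that come out sideways (the "transpose" case described above).
import cv2

def read_frames(video_path, transpose=False):
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if transpose:
            # Rotation direction is device-dependent; adjust if needed.
            frame = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
        frames.append(frame)
    cap.release()
    return frames
```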
Hi, thanks! `IBRNetWithNeuRay` is exactly the same as IBRNet but with additional visibility terms. However, `NeuralRayGenRenderer` without the visibility terms will be slightly worse than the original IBRNet because the image...
Hi, the image encoder is actually a CNN (not an MLP) that is responsible for extracting image features for feature aggregation (matching). In this case, using a larger CNN...
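For concreteness, a minimal sketch of the kind of CNN image encoder described above: it maps an RGB image to a per-pixel feature map that can later be sampled for feature aggregation/matching. This is NOT the network used in NeuRay, only an illustration of the idea; the layer sizes are arbitrary.

```python
# Sketch: a tiny convolutional image encoder producing a feature map
# that downstream code could sample for feature aggregation (matching).
import torch
import torch.nn as nn

class TinyImageEncoder(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, feat_dim, 3, stride=1, padding=1),
        )

    def forward(self, images):          # images: (B, 3, H, W)
        return self.net(images)         # features: (B, feat_dim, H/4, W/4)

# feats = TinyImageEncoder()(torch.randn(1, 3, 256, 256))  # -> (1, 32, 64, 64)
```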
You may use the depth map rendered here https://github.com/liuyuan-pal/NeuRay/blob/ecc10276a96c90d65c639a4558c1e6d95874a915/render.py#L61 and then compute the point cloud from it.
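A minimal sketch (not the repo's own code) of back-projecting such a rendered depth map into a point cloud. It assumes a pinhole intrinsic matrix `K` and a world-to-camera pose `(R, t)`; `depth` is the (H, W) map produced at the line linked above.

```python
# Sketch: convert a rendered depth map into a world-space point cloud.
import numpy as np

def depth_to_point_cloud(depth, K, R=np.eye(3), t=np.zeros(3)):
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))                    # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)   # homogeneous pixels
    cam_pts = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)     # camera-frame points
    world_pts = (R.T @ (cam_pts - t).T).T                             # invert world-to-camera pose
    return world_pts[depth.reshape(-1) > 0]                           # drop empty/background pixels

# Example usage:
# pc = depth_to_point_cloud(depth, K, R, t)
# np.savetxt("points.xyz", pc)
```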
Thanks for your advice! I've updated the code to support finetuning on custom scenes; the commands can be found at https://github.com/liuyuan-pal/NeuRay/blob/main/custom_rendering.md#finetuning-on-a-custom-scene