neuralangelo
Mesh bad even though training shows good results
I have trained on DTU65 with good results. I am now training on custom data - a scan of a Logitech mouse - with the latest commit. The training looks good after 500k steps, with nice normals and rendering:
Nevertheless, when I mesh it according to instructions, the output is completely wrong, as you can see here:
The bounding sphere could be optimized, but as training appears successful, there seems to be something wrong with the meshing.
What could be the problem here? Is there a setting in mesh generation that needs to be adjusted? I have seen that others had issues with meshing from a particular checkpoint. Here is my code:
```shell
# mouse-2
EXPERIMENT=custom/mouse-2
CONFIG=logs/neuralangelo/mouse-2/config.yaml  # config saved alongside the checkpoint
CHECKPOINT=logs/neuralangelo/mouse-2/epoch_01683_iteration_000500000_checkpoint.pt
OUTPUT_MESH=mouse-2_mesh.ply
RESOLUTION=2048
BLOCK_RES=128

# generate mesh
torchrun projects/neuralangelo/scripts/extract_mesh.py \
    --config=${CONFIG} \
    --checkpoint=${CHECKPOINT} \
    --output_file=${OUTPUT_MESH} \
    --resolution=${RESOLUTION} \
    --block_res=${BLOCK_RES} \
    --textured
```
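Since a loose or misplaced bounding sphere is the usual suspect when training looks fine but the extracted mesh is wrong, here is a small sanity check (my own sketch, not part of the repo): given the sparse points from the COLMAP reconstruction and the sphere center/radius from your config, it reports how much of the scene actually falls inside the sphere.

```python
import numpy as np

def fraction_inside_sphere(points, center, radius):
    """Return the fraction of 3D points that fall inside the bounding sphere.

    points: (N, 3) array of sparse-reconstruction points.
    center, radius: the bounding sphere used in the config.
    """
    dists = np.linalg.norm(np.asarray(points, dtype=float) - np.asarray(center, dtype=float), axis=1)
    return float((dists <= radius).mean())

# Example: half of these points lie inside the unit sphere at the origin.
pts = np.array([[0.1, 0.0, 0.0], [0.5, 0.5, 0.0], [3.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
print(fraction_inside_sphere(pts, center=(0.0, 0.0, 0.0), radius=1.0))  # 0.5
```

If this fraction is low, the sphere center and scale in the config likely need readjusting before extraction.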
Any help would be appreciated.
For objects with few surface features, the camera poses estimated during the COLMAP stage are inaccurate, so the generated result is very poor.
But the poses actually look quite accurate, as they move very consistently. Also, the rendered image for evaluation looks very close to the target. Or which generated image do you mean is very poor?
I met the same problem. I imported my COLMAP data into Blender and the bounding sphere looks tight, but the generated mesh is not even close to my data. The mesh was much better before the texture-fix commit.
Hi @dolcelee
Can you:
- send me your config file and checkpoint?
- point me to which two commits you compared?
Same problem. Used default settings and manually adjusted the bounding box in Blender (though it had free space here and there - if I made it smaller, I would cut out important pieces of the point cloud). The val/vis/rgb_render on W&B looked decent; I didn't wait for the full 500k iters and exported the mesh at around 350k. It looked like a huge box full of colourful blobs (the input video was a street with a road and houses).
@mli0603
- https://drive.google.com/file/d/1pafOO_sG9Fw-vNjFIxALPRntiULc0H8O/view?usp=sharing
- I uploaded my data and generated mesh
- This problem appears after commit c91af8d5098c858df8e8dfa35fba8666d314782b
@mli0603
I have also added my example to be able to reproduce:
- Checkpoint
- Config
- Data
- Mesh
https://1drv.ms/u/s!AtwBlzVMECHC4m1cGAxKqqd_mTKB?e=zVayUR
I have retried and it seems to work now (with commit b772282d26f62064401b1f4f0d53eefe908afdb3). I do not know why, but I did the following things:
- I generated the config file with `python3 projects/neuralangelo/scripts/generate_config.py --sequence_name mouse-2 --data_dir $DATA_PATH --scene_type object`, instead of using `custom/template.yaml`.
- I ensured that the bounding sphere fits tightly around the object and updated the sphere's center point and scaling in the generated config file.

After retraining, I could generate a decent mesh using `extract_mesh.py`.
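For reference, the sphere adjustment I mean is the `data.readjust` block in the generated config. This is how I remember the fields from my own config; double-check the names against your generated file:

```yaml
data:
    readjust:
        center: [0.0, 0.0, 0.0]   # sphere center in normalized world coordinates
        scale: 1.0                # shrink (<1) or grow (>1) the bounding sphere
```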
I have written a notebook that guides you through the process of preparing a custom-made dataset: Notebook
@deeepwin Hi! If it works, could you please show the quality of the exported mesh?
@iam-machine Yes, sure, here is my example, textured and untextured:
Hi @deeepwin @dolcelee
Thank you for sharing the useful info! We are looking into this issue. Will update!
Hi @dolcelee
On my end, even before the commit you provided, I am getting the same mesh results. I want to confirm that you are getting two different meshes with the same checkpoint. Is this the case?
@mli0603
I retrained after commit c91af8d5098c858df8e8dfa35fba8666d314782b, since retraining is required. Therefore, it's not the same checkpoint that generated the good mesh, but the data and the COLMAP-processed files were the same.
@mli0603
I followed deeepwin's method and did the whole process all over again, starting at the COLMAP part. I don't know why, but I got a decent mesh! The shape of the mesh is pretty awesome, but the color is weird, especially the skin part. Can you help me improve the performance?
I also tested a custom model and just produced the third result. The mesh looks okay, but the colors are clearly much more saturated than the original image.
original image
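To put a number on the oversaturation, here is a small sketch (my own, not part of the repo) that compares mean HSV saturation between images; replace the synthetic arrays below with the render and the original, loaded as float RGB in [0, 1]:

```python
import numpy as np

def mean_saturation(rgb):
    """Mean HSV saturation of a float RGB image in [0, 1], shape (H, W, 3)."""
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    # S = (max - min) / max, defined as 0 where the pixel is black.
    sat = np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1.0), 0.0)
    return float(sat.mean())

# Synthetic check: a pure-red image is fully saturated, a gray one is not.
red = np.zeros((4, 4, 3)); red[..., 0] = 1.0
gray = np.full((4, 4, 3), 0.5)
print(mean_saturation(red), mean_saturation(gray))  # 1.0 0.0
```

A render scoring noticeably higher than the source image would confirm the saturation shift rather than a plain color cast.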
Thanks for the update.
- For the mesh problem: we have recently updated the preprocessing formats. To fix this issue, either 1) run the entire preprocessing script again, or 2) just run steps 3 and 4 of the preprocessing script.
- For the color problem: Let me see if I can reproduce it on my end. @dolcelee I assume you are using the same config as you shared with me?
@deeepwin thanks for the nice notebook. Can you tell me how you are calculating camera poses for your video?
@Choco83 there are several ways. With the mouse-2 example I used Kiri; see the Nerfstudio description here. In one dataset, I already had the poses in COLMAP format (images.txt) from the image sensor; see here.
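For anyone bringing their own poses, the COLMAP text format mentioned above is easy to produce or check. This is a minimal parser sketch (my own code, hypothetical helper name) for the pose lines of `images.txt`: each image occupies two lines, where the first holds the pose and the second the 2D points, which this sketch skips.

```python
def parse_images_txt(lines):
    """Parse COLMAP images.txt into a dict: image name -> (qvec, tvec).

    Per image, line 1 is: IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME
    and line 2 lists the observed 2D points (skipped here).
    """
    poses = {}
    rows = [ln for ln in lines if ln.strip() and not ln.startswith("#")]
    for pose_line in rows[0::2]:  # every second row is a 2D-point line
        parts = pose_line.split()
        qvec = tuple(float(x) for x in parts[1:5])  # qw, qx, qy, qz
        tvec = tuple(float(x) for x in parts[5:8])  # tx, ty, tz
        poses[parts[9]] = (qvec, tvec)
    return poses

sample = [
    "# Image list with two lines of data per image:",
    "1 1.0 0.0 0.0 0.0 0.1 0.2 0.3 1 frame_0001.jpg",
    "100.0 200.0 -1",
    "2 0.7071 0.0 0.7071 0.0 0.0 0.0 1.0 1 frame_0002.jpg",
    "50.0 60.0 5",
]
print(sorted(parse_images_txt(sample)))  # ['frame_0001.jpg', 'frame_0002.jpg']
```

Comparing the poses parsed this way against what the preprocessing emits is a quick way to spot a convention mismatch.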
> @mli0603 I followed deeepwin's method and did the whole process all over again, starting at the COLMAP part. I don't know why, but I got a decent mesh! The shape of the mesh is pretty awesome, but the color is weird, especially the skin part. Can you help me improve the performance?
> I also tested a custom model and just produced the third result. The mesh looks okay, but the colors are clearly much more saturated than the original image.
> original image
Awesome result! Could you please tell me how your video was shot, and whether the data preprocessing followed the DATA_PROCESSING.md document?