neuralangelo
Poor Performance on ScanNet
Thanks for your excellent work! I have tried it on ScanNet datasets; however, I got totally bad results. This is the reconstruction result after 30k iterations:
where the GT mesh looks like this:
I have adjusted the dataset format and config files, but I don't know whether the result is due to the method or to some hyperparameters. The adjusted config file is below:
```yaml
model:
    object:
        sdf:
            mlp:
                inside_out: True
                out_bias: 0.6
            encoding:
                coarse2fine:
                    init_active_level: 8
    appear_embed:
        enabled: True
        dim: 8
    background:
        enabled: False
    render:
        num_samples:
            background: 0
```
Additionally, I normalized the ScanNet scene into the cube [-0.5, 0.5]^3, and I changed the near/far sampling to 0 and 1.5.
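For reference, the normalization described above can be sketched like this. This is a hedged, minimal sketch (not ScanNet's or Neuralangelo's official preprocessing); `normalize_to_unit_cube` and its bounding-box-based scaling are my own assumptions:

```python
import numpy as np

def normalize_to_unit_cube(points: np.ndarray):
    """Shift the scene center to the origin and scale it into [-0.5, 0.5]^3.

    `points` is an (N, 3) array, e.g. camera centers or sparse scene points.
    Returns the normalized points plus the (center, scale) used, so the same
    transform can be applied to the camera poses.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    center = (lo + hi) / 2.0                # axis-aligned bounding-box center
    scale = (hi - lo).max()                 # largest extent defines the scale
    normalized = (points - center) / scale  # each axis now within [-0.5, 0.5]
    return normalized, center, scale
```

The same `center` and `scale` would also need to be applied to the camera-to-world translations so that poses and geometry stay consistent.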
Do you have any idea why performance on ScanNet is so poor?
Given what I understand of Neuralangelo, more iterations = finer details. The default number of iterations in the config is 500k, so might it be that you were not training long enough?
(I will have a comparison of 50k steps vs. 500k steps in about 1 hr.)
Hi @zParquet, it could very well be that it has not been trained long enough yet. Do the results in your visualizer (e.g. W&B) look reasonable? Additionally, I've pushed a small fix to the mesh extraction script (#41), which may be related to your issue. If you pull again and still see the same issue, please let me know.
At 50k steps it was a mere blob, whereas at 500k steps it is usable (this was bad input material, btw).
50k steps, hashgrid.dict_size 10
500k steps, hashgrid.dict_size 16
Nice results, thanks @chris-aeviator for sharing! Have you tried using a larger dict_size value (the default was 22)? It would also be great if you could try extracting the mesh with textures (#45)!
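To put the dict_size comparison above in perspective, here is a rough sketch of how it scales the hash-grid table. The level count and features-per-level below are assumptions for illustration, not values read from the repo's config; `hashgrid.dict_size` is the log2 of the number of table entries per level:

```python
def hashgrid_params(dict_size: int, n_levels: int = 16, n_features: int = 2) -> int:
    """Estimate the number of hash-grid parameters.

    dict_size  : log2 of the number of hash-table entries per level
    n_levels   : number of resolution levels (assumed value)
    n_features : feature channels per entry (assumed value)
    """
    entries_per_level = 2 ** dict_size
    return entries_per_level * n_levels * n_features
```

Going from dict_size 16 to the default 22 multiplies the table size by 2**6 = 64, which is why a small dict_size can bottleneck fine detail.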
Dear @zParquet,
I'm curious, did you observe improved results after extending the number of training iterations?
Best, Mulin
Is Neuralangelo only applicable to 3D reconstruction of a single object from multiple perspectives, rather than reconstruction of interior spaces? I shot a piece of indoor footage, and after training, the result is a round object, not an indoor space.
@MulinYu Hi, I'm currently running a new experiment with more training iterations. I will post the result as soon as the experiment completes ;)
chris-aeviator
How can I visualize this .ply result?
This time I trained ScanNet scene0050_00 for 500k iterations. It seems better than the 30k-iteration result, but it is still bad :(
I plan to rerun the script with a larger dict size and hope it gets better...
@440981 @zParquet have you set the scene type to indoor in the data processing step? Also, you would need to make sure the bounding sphere encapsulates the entire indoor region.
@chenhsuanlin I transformed the data manually from the raw ScanNet dataset. I rescaled the scene so that its center lies at the coordinate origin (0,0,0) and the whole scene fits inside the cube [-0.5, 0.5]^3. I think this transformation plays the same role as the "indoor" processing step. I noticed that Neuralangelo initializes the scene as a sphere of radius 1, so I'm convinced the scene lies within the bounding region after the transformation.
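One way to double-check the claim above is a quick sanity check that every normalized camera center (and scene point) actually falls inside the radius-1 sphere. This is a hedged helper of my own, not part of the repo:

```python
import numpy as np

def inside_unit_sphere(points: np.ndarray, radius: float = 1.0) -> bool:
    """True iff every (N, 3) point lies within the sphere of given radius."""
    return bool((np.linalg.norm(points, axis=1) <= radius).all())
```

Note that the farthest corner of the cube [-0.5, 0.5]^3 has norm sqrt(3)/2 ≈ 0.87 < 1, so a scene correctly normalized into that cube does fit inside the unit sphere; the check is mainly useful for catching cameras that were not rescaled along with the geometry.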
@zParquet we pushed an update to main yesterday that fixed a checkpoint issue, which may be related. Could you pull and try running the pipeline again? Please let me know if there are further issues with the latest code.
@zParquet Hello, have you solved it? I met the same problem: poor performance on the ScanNet dataset.
Have you solved this problem? I'm having the same issue with an indoor dataset.
I did not get a satisfactory result in the end 😥. I'm afraid Neuralangelo may have limited generalization to indoor scene datasets. If anyone obtains better results, please let me know where I went wrong.