The lego mesh result is not good. How can I adjust the parameters to get better results?
I use:
| GPU VRAM | Hyperparameter |
|---|---|
| 8GB | dict_size=20, dim=4 |
Run the command as follows:

    torchrun --nproc_per_node=1 projects/neuralangelo/scripts/extract_mesh.py \
        --config=logs/video2/config.yaml \
        --checkpoint=logs/video2/epoch_00400_iteration_000020000_checkpoint.pt \
        --output_file=video3.ply \
        --resolution=2014 \
        --block_res=128 \
        --textured
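As a sanity check on the export itself, the resulting `.ply` can be inspected before judging the reconstruction. A minimal sketch, assuming trimesh is installed (it is not part of the Neuralangelo scripts):

```python
# Minimal sketch (assumes `pip install trimesh`): check that the extracted mesh
# is non-empty and that --textured actually produced color information.
import trimesh

mesh = trimesh.load("video3.ply", process=False)  # keep the data exactly as exported
print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")

# `kind` is 'vertex' when per-vertex colors were written to the file,
# and None when no color information is present at all.
print("color kind:", mesh.visual.kind)
```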
val/vis/normal: (screenshot attached)

val/vis/inv_depth: (screenshot attached)
The result from the visualization notebook is:

    Root Directory Path: /home
    ============== 1
    /home/video3d/code/neuralangelo/datasets/lego_ds2/sparse
    # images: 100
    # points: 17228
    /home/video3d/yes/envs/neuralangelo/lib/python3.8/site-packages/traittypes/traittypes.py:97: UserWarning:
    Given trait value dtype "float64" does not match required type "float32". A coerced copy has been created.
    (k3d Plot widget showing the camera poses; interactive output omitted)
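Those two numbers (100 registered images, 17228 sparse points) can also be read straight from the COLMAP sparse model. A small sketch, assuming pycolmap is installed (it is not required by Neuralangelo itself):

```python
# Sketch assuming `pip install pycolmap`; the path is the one printed above.
import pycolmap

rec = pycolmap.Reconstruction("/home/video3d/code/neuralangelo/datasets/lego_ds2/sparse")
print("registered images:", len(rec.images))
print("sparse 3D points: ", len(rec.points3D))

# Few registered images or very sparse points usually indicate a weak COLMAP
# reconstruction, which limits the quality Neuralangelo can reach.
```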
How can I get better results?
Hi @yuxuJava789
The 8 GB default parameters will lead to some performance degradation. However, you can try different parameters beyond the suggested values to see what works best. My intuition is that increasing dim to 8 and lowering dict_size to something like 19 or 18 may work better while still requiring only 8 GB of VRAM.
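To make the dict_size / dim trade-off concrete, here is a rough back-of-envelope estimate of the hash-grid size (my own approximation, not repo code; it assumes 16 levels and fp16 grid parameters, and ignores the MLPs, optimizer state, and ray samples that dominate peak VRAM during training):

```python
# Rough size estimate for a multi-resolution hash grid: each of `levels` levels
# stores at most 2**dict_size entries of `dim` features (approximation, not repo code).
def hashgrid_gb(dict_size, dim, levels=16, bytes_per_param=2):  # fp16 assumed
    return levels * (2 ** dict_size) * dim * bytes_per_param / 1024 ** 3

for dict_size, dim in [(20, 4), (19, 8), (18, 8), (22, 8)]:
    print(f"dict_size={dict_size}, dim={dim}: ~{hashgrid_gb(dict_size, dim):.2f} GB")
```

Note that (20, 4) and (19, 8) have the same table size, so the suggestion trades hash entries for feature width at roughly constant memory rather than shrinking the grid.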
Let us know how it goes!
@yuxuJava789 if you are training with the default config, this is expected at 20k iterations. You would need to run to 500k iterations to get the final results. If you want some faster experiment turnarounds, please also consider checking out the new Colab notebook.
@mli0603 @chenhsuanlin Hi, I followed your advice and ran to 500k iterations (120 hours of training). The result is:

(screenshot attached)
My config is:

    checkpoint:
      save_epoch: 9999999999
      save_iter: 20000
      save_latest_iter: 9999999999
      save_period: 9999999999
      strict_resume: true
    cudnn:
      benchmark: true
      deterministic: false
    data:
      name: dummy
      num_images: null
      num_workers: 4
      preload: true
      readjust:
        center:
        - 0.0
        - 0.0
        - 0.0
        scale: 0.5
      root: datasets/lego_ds2
      train:
        batch_size: 2
        image_size:
        - 801
        - 801
        subset: null
      type: projects.neuralangelo.data
      use_multi_epoch_loader: true
      val:
        batch_size: 2
        image_size:
        - 300
        - 300
        max_viz_samples: 16
        subset: 4
    image_save_iter: 9999999999
    inference_args: {}
    local_rank: 0
    logdir: logs/video3
    logging_iter: 9999999999999
    max_epoch: 9999999999
    max_iter: 500000
    metrics_epoch: null
    metrics_iter: null
    model:
      appear_embed:
        dim: 8
        enabled: false
      background:
        enabled: true
        encoding:
          levels: 10
          type: fourier
        encoding_view:
          levels: 3
          type: spherical
        mlp:
          activ: relu
          activ_density: softplus
          activ_density_params: {}
          activ_params: {}
          hidden_dim: 256
          hidden_dim_rgb: 128
          num_layers: 8
          num_layers_rgb: 2
          skip:
          - 4
          skip_rgb: []
        view_dep: true
        white: false
      object:
        rgb:
          encoding_view:
            levels: 3
            type: spherical
          mlp:
            activ: relu_
            activ_params: {}
            hidden_dim: 256
            num_layers: 4
            skip: []
            weight_norm: true
          mode: idr
        s_var:
          anneal_end: 0.1
          init_val: 3.0
        sdf:
          encoding:
            coarse2fine:
              enabled: true
              init_active_level: 4
              step: 5000
            hashgrid:
              dict_size: 19
              dim: 8
              max_logres: 11
              min_logres: 5
              range:
              - -2
              - 2
            levels: 16
            type: hashgrid
          gradient:
            mode: numerical
            taps: 4
          mlp:
            activ: softplus
            activ_params:
              beta: 100
            geometric_init: true
            hidden_dim: 256
            inside_out: false
            num_layers: 1
            out_bias: 0.5
            skip: []
            weight_norm: true
      render:
        num_sample_hierarchy: 4
        num_samples:
          background: 32
          coarse: 64
          fine: 16
        rand_rays: 512
        stratified: true
      type: projects.neuralangelo.model
    nvtx_profile: false
    optim:
      fused_opt: false
      params:
        lr: 0.001
        weight_decay: 0.01
      sched:
        gamma: 10.0
        iteration_mode: true
        step_size: 9999999999
        two_steps:
        - 300000
        - 400000
        type: two_steps_with_warmup
        warm_up_end: 5000
      type: AdamW
    pretrained_weight: null
    source_filename: projects/neuralangelo/configs/custom/lego.yaml
    speed_benchmark: false
    test_data:
      name: dummy
      num_workers: 0
      test:
        batch_size: 1
        is_lmdb: false
        roots: null
      type: imaginaire.datasets.images
    timeout_period: 9999999
    trainer:
      amp_config:
        backoff_factor: 0.5
        enabled: false
        growth_factor: 2.0
        growth_interval: 2000
        init_scale: 65536.0
      ddp_config:
        find_unused_parameters: false
        static_graph: true
      depth_vis_scale: 0.5
      ema_config:
        beta: 0.9999
        enabled: false
        load_ema_checkpoint: false
        start_iteration: 0
      grad_accum_iter: 1
      image_to_tensorboard: false
      init:
        gain: null
        type: none
      loss_weight:
        curvature: 0.0005
        eikonal: 0.1
        render: 1.0
      type: projects.neuralangelo.trainer
    validation_iter: 5000
    wandb_image_iter: 10000
    wandb_scalar_iter: 100
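For reference, two of the schedules implied by the values above can be worked out directly (a quick sketch; the helper is mine, and I assume gamma=10.0 divides the learning rate at each of the two steps, which is the usual step-decay convention):

```python
# Quick calculations from the config values above (helper names are mine).

# coarse2fine: starting from init_active_level, one additional hash level is
# activated every `step` iterations until all `levels` are active.
init_active_level, step, levels = 4, 5000, 16
print("all hash levels active from iteration ~", (levels - init_active_level) * step)  # 60000

# two_steps_with_warmup: linear warm-up to lr=1e-3 over 5000 iterations, then
# a drop by gamma=10 at 300k and again at 400k (assumed convention).
def lr_at(it, base_lr=1e-3, warm_up_end=5000, two_steps=(300000, 400000), gamma=10.0):
    lr = base_lr * min(it / warm_up_end, 1.0)
    return lr / gamma ** sum(it >= s for s in two_steps)

for it in (5000, 100000, 350000, 450000):
    print(f"iter {it:>6}: lr ~ {lr_at(it):.1e}")
```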
How can I get a higher-quality mesh with color?
With parameters dict_size=21, dim=4 (RTX 3060, 12GB), after 500K iterations (10K epochs) I got 25 .pt files.
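The 25 .pt files line up with a checkpoint being saved every 20k iterations, assuming the same save settings as the config posted above (save_iter: 20000):

```python
# 500k iterations with a checkpoint every 20k iterations (save_iter: 20000) -> 25 files.
print(500_000 // 20_000)  # 25
```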
When I extract the mesh with the last checkpoint (epoch_10000_iteration_000500000_checkpoint.pt), the final mesh is not what I want, like the following:
Did I do something wrong? What can I do to improve the quality?