There were some errors when running 'run_nerf.py'
There were some errors while running 'run_nerf.py'. This is what I saw:

[extract_mesh()] query_pts:torch.Size([551368, 3]), valid:196363
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(7245, 3), F:(14526, 3)
[train()] train progress 600/1001
[train()] train progress 700/1001
[train()] train progress 800/1001
[train()] train progress 900/1001
[train()] train progress 1000/1001
Saved checkpoints at /mnt/9a72c439-d0a7-45e8-8d20-d7a235d02763/DATASET/YCB_Video/bowen_addon/ref_views_16/ob_0000020/nerf/model_latest.pth
[train_loop()] Iter: 1000, valid_samples: 524032/524288, valid_rays: 2047/2048, loss: 0.9520035, rgb_loss: 0.0936889, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 0.2549306, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 0.5985778, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.0019117,
[extract_mesh()] query_pts:torch.Size([551368, 3]), valid:196363
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(7347, 3), F:(14722, 3)
[extract_mesh()] query_pts:torch.Size([551368, 3]), valid:196363
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(7347, 3), F:(14722, 3)
[mesh_texture_from_train_images()] Texture: Texture map computation
project train_images 0/16 project train_images 1/16 project train_images 2/16 project train_images 3/16 project train_images 4/16 project train_images 5/16 project train_images 6/16 project train_images 7/16 project train_images 8/16 project train_images 9/16 project train_images 10/16 project train_images 11/16 project train_images 12/16 project train_images 13/16 project train_images 14/16 project train_images 15/16
[[[67.6 21. 28.8] [ nan nan nan] [ nan nan nan] ... [ nan nan nan] [ nan nan nan] [ nan nan nan]]
[[ nan nan nan] [ nan nan nan] [ nan nan nan] ... [ nan nan nan] [ nan nan nan] [ nan nan nan]]
[[ nan nan nan] [ nan nan nan] [ nan nan nan] ... [ nan nan nan] [ nan nan nan] [ nan nan nan]]
...
[[ nan nan nan] [ nan nan nan] [ nan nan nan] ... [ nan nan nan] [ nan nan nan] [ nan nan nan]]
[[ nan nan nan] [ nan nan nan] [ nan nan nan] ... [ nan nan nan] [ nan nan nan] [ nan nan nan]]
[[ nan nan nan]
[ nan nan nan]
[ nan nan nan]
...
[ nan nan nan]
[ nan nan nan]
[ nan nan nan]]]
/data_1T/FoundationPose/bundlesdf/nerf_runner.py:1228: RuntimeWarning: invalid value encountered in cast
tex_image = np.clip(tex_image,0,255).astype(np.uint8)
save_dir: /mnt/9a72c439-d0a7-45e8-8d20-d7a235d02763/DATASET/YCB_Video/bowen_addon/ref_views_16/ob_0000021/nerf
/data_1T/FoundationPose/bundlesdf/run_nerf.py:62: DeprecationWarning: Starting with ImageIO v3 the behavior of this function will switch to that of iio.v3.imread. To keep the current behavior (and make this warning disappear) use import imageio.v2 as imageio or call imageio.v2.imread directly.
rgb = imageio.imread(color_file)
[compute_scene_bounds()] compute_scene_bounds_worker start
[compute_scene_bounds()] compute_scene_bounds_worker done
[compute_scene_bounds()] merge pcd
[compute_scene_bounds()] compute_translation_scales done
translation_cvcam=[ 0.00385102 -0.00508065 -0.00442552], sc_factor=17.767770784205347
[build_octree()] Octree voxel dilate_radius:1
[init()] level:0, vox_pts:torch.Size([1, 3]), corner_pts:torch.Size([8, 3])
[init()] level:1, vox_pts:torch.Size([8, 3]), corner_pts:torch.Size([27, 3])
[init()] level:2, vox_pts:torch.Size([64, 3]), corner_pts:torch.Size([125, 3])
[draw()] level:2
[draw()] level:2
level 0, resolution: 32
level 1, resolution: 39
level 2, resolution: 47
level 3, resolution: 56
level 4, resolution: 68
level 5, resolution: 81
level 6, resolution: 98
level 7, resolution: 117
level 8, resolution: 141
level 9, resolution: 169
level 10, resolution: 204
level 11, resolution: 245
level 12, resolution: 295
level 13, resolution: 354
level 14, resolution: 426
level 15, resolution: 512
GridEncoder: input_dim=3 n_levels=16 level_dim=2 resolution=32 -> 512 per_level_scale=1.2030 params=(36112368, 2) gridtype=hash align_corners=False
sc_factor 17.767770784205347
translation [ 0.00385102 -0.00508065 -0.00442552]
[init()] denoise cloud
[init()] Denoising rays based on octree cloud
[init()] bad_mask#=0
rays torch.Size([435879, 12])
[train()] train progress 0/1001
[train_loop()] Iter: 0, valid_samples: 524242/524288, valid_rays: 2048/2048, loss: 39.7150269, rgb_loss: 5.8957958, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 24.8109512, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 8.8068542, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.0995187,
[train()] train progress 100/1001
[train()] train progress 200/1001
[train()] train progress 300/1001
[train()] train progress 400/1001
[train()] train progress 500/1001
Saved checkpoints at /mnt/9a72c439-d0a7-45e8-8d20-d7a235d02763/DATASET/YCB_Video/bowen_addon/ref_views_16/ob_0000021/nerf/model_latest.pth
[train_loop()] Iter: 500, valid_samples: 521683/524288, valid_rays: 2038/2048, loss: 2.1410861, rgb_loss: 0.2701510, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 0.5678607, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 1.2915561, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.0086144,
[extract_mesh()] query_pts:torch.Size([54872, 3]), valid:54872
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(3988, 3), F:(7968, 3)
[train()] train progress 600/1001
[train()] train progress 700/1001
[train()] train progress 800/1001
[train()] train progress 900/1001
[train()] train progress 1000/1001
Saved checkpoints at /mnt/9a72c439-d0a7-45e8-8d20-d7a235d02763/DATASET/YCB_Video/bowen_addon/ref_views_16/ob_0000021/nerf/model_latest.pth
[train_loop()] Iter: 1000, valid_samples: 522224/524288, valid_rays: 2040/2048, loss: 1.9105502, rgb_loss: 0.1442035, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 0.5000084, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 1.2592565, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.0050660,
[extract_mesh()] query_pts:torch.Size([54872, 3]), valid:54872
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(4032, 3), F:(8056, 3)
[extract_mesh()] query_pts:torch.Size([54872, 3]), valid:54872
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(4032, 3), F:(8056, 3)
[mesh_texture_from_train_images()] Texture: Texture map computation
project train_images 0/16 project train_images 1/16 project train_images 2/16 project train_images 3/16 project train_images 4/16 project train_images 5/16 project train_images 6/16 project train_images 7/16 project train_images 8/16 project train_images 9/16 project train_images 10/16 project train_images 11/16 project train_images 12/16 project train_images 13/16 project train_images 14/16 project train_images 15/16
[[[nan nan nan] [nan nan nan] [nan nan nan] ... [nan nan nan] [nan nan nan] [nan nan nan]]
[[nan nan nan] [nan nan nan] [nan nan nan] ... [nan nan nan] [nan nan nan] [nan nan nan]]
[[nan nan nan] [nan nan nan] [nan nan nan] ... [nan nan nan] [nan nan nan] [nan nan nan]]
...
[[nan nan nan] [nan nan nan] [nan nan nan] ... [nan nan nan] [nan nan nan] [nan nan nan]]
[[nan nan nan] [nan nan nan] [nan nan nan] ... [nan nan nan] [nan nan nan] [nan nan nan]]
[[nan nan nan] [nan nan nan] [nan nan nan] ... [nan nan nan] [nan nan nan] [nan nan nan]]]
/data_1T/FoundationPose/bundlesdf/nerf_runner.py:1228: RuntimeWarning: invalid value encountered in cast
tex_image = np.clip(tex_image,0,255).astype(np.uint8)
free(): invalid pointer
Aborted (core dumped)

These are some of the final outputs when the error occurs. Hope to receive your answer. Thanks!
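For context on that RuntimeWarning, here is a minimal sketch of what I assume is happening at nerf_runner.py:1228: texels that no training image projects onto stay NaN in the float texture map, and casting NaN to uint8 triggers the warning. The array shape and fill value below are made up for illustration; only the clip/astype line mirrors the log.

```python
import numpy as np

# Hypothetical texture map: most texels were never observed, so they remain NaN.
tex_image = np.full((4, 4, 3), np.nan, dtype=np.float32)
tex_image[0, 0] = [67.6, 21.0, 28.8]  # the one texel that received a color

# Same pattern as nerf_runner.py:1228; casting NaN to uint8 raises
# "RuntimeWarning: invalid value encountered in cast".
tex_u8 = np.clip(tex_image, 0, 255).astype(np.uint8)

# Replacing NaNs with a fill color before the cast avoids the warning entirely.
tex_u8_clean = np.clip(np.nan_to_num(tex_image, nan=0.0), 0, 255).astype(np.uint8)
print(tex_u8_clean[0, 0])  # -> [67 21 28]
```

If that is the cause, the warning only means parts of the texture were never seen by any of the 16 reference views; it would not by itself explain the free(): invalid pointer abort at the end.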
Hello,
I got a similar issue, always occurring after processing the last object. This is the end of the script output during the failure:
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(3976, 3), F:(7944, 3)
[train()] train progress 600/1001
[train()] train progress 700/1001
[train()] train progress 800/1001
[train()] train progress 900/1001
[train()] train progress 1000/1001
Saved checkpoints at /home/FoundationPose/datasets/model_free_pretrained/ycbv/ref_views_16/ob_0000021/nerf/model_latest.pth
[train_loop()] Iter: 1000, valid_samples: 522224/524288, valid_rays: 2040/2048, loss: 1.9035075, rgb_loss: 0.1436940, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 0.4987548, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 1.2539434, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.0051176,
[extract_mesh()] query_pts:torch.Size([54872, 3]), valid:54872
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(4052, 3), F:(8096, 3)
[extract_mesh()] query_pts:torch.Size([54872, 3]), valid:54872
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(4052, 3), F:(8096, 3)
[mesh_texture_from_train_images()] Texture: Texture map computation
project train_images 0/16 project train_images 1/16 project train_images 2/16 project train_images 3/16 project train_images 4/16 project train_images 5/16 project train_images 6/16 project train_images 7/16 project train_images 8/16 project train_images 9/16 project train_images 10/16 project train_images 11/16 project train_images 12/16 project train_images 13/16 project train_images 14/16 project train_images 15/16
/home/FoundationPose/bundlesdf/nerf_runner.py:1227: RuntimeWarning: invalid value encountered in cast
tex_image = np.clip(tex_image,0,255).astype(np.uint8)
Could we get support on this? Is something wrong with the script execution, or did it actually finish properly despite the warning output at the end?
Thank you very much for your support.
Hello, I have got the same problem:
/home/cqy/python/FoundationPose/bundlesdf/run_nerf.py:61: DeprecationWarning: Starting with ImageIO v3 the behavior of this function will switch to that of iio.v3.imread. To keep the current behavior (and make this warning disappear) use `import imageio.v2 as imageio` or call `imageio.v2.imread` directly.
rgb = imageio.imread(color_file)
[compute_scene_bounds()] compute_scene_bounds_worker start
[compute_scene_bounds()] compute_scene_bounds_worker done
[compute_scene_bounds()] merge pcd
[compute_scene_bounds()] compute_translation_scales done
translation_cvcam=[ 0.00385102 -0.00508065 -0.00442552], sc_factor=17.767770784205347
[build_octree()] Octree voxel dilate_radius:1
[init()] level:0, vox_pts:torch.Size([1, 3]), corner_pts:torch.Size([8, 3])
[init()] level:1, vox_pts:torch.Size([8, 3]), corner_pts:torch.Size([27, 3])
[init()] level:2, vox_pts:torch.Size([64, 3]), corner_pts:torch.Size([125, 3])
[draw()] level:2
[draw()] level:2
level 0, resolution: 32
level 1, resolution: 39
level 2, resolution: 47
level 3, resolution: 56
level 4, resolution: 68
level 5, resolution: 81
level 6, resolution: 98
level 7, resolution: 117
level 8, resolution: 141
level 9, resolution: 169
level 10, resolution: 204
level 11, resolution: 245
level 12, resolution: 295
level 13, resolution: 354
level 14, resolution: 426
level 15, resolution: 512
GridEncoder: input_dim=3 n_levels=16 level_dim=2 resolution=32 -> 512 per_level_scale=1.2030 params=(36112368, 2) gridtype=hash align_corners=False
sc_factor 17.767770784205347
translation [ 0.00385102 -0.00508065 -0.00442552]
[init()] denoise cloud
[init()] Denoising rays based on octree cloud
[init()] bad_mask#=0
rays torch.Size([435879, 12])
[train()] train progress 0/1001
[train_loop()] Iter: 0, valid_samples: 524242/524288, valid_rays: 2048/2048, loss: 39.7148476, rgb_loss: 5.8957934, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 24.8109512, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 8.8066797, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.0995187,
[train()] train progress 100/1001
[train()] train progress 200/1001
[train()] train progress 300/1001
[train()] train progress 400/1001
[train()] train progress 500/1001
Saved checkpoints at demo_data/ycb_video/ref_views_16/ob_0000021/nerf/model_latest.pth
[train_loop()] Iter: 500, valid_samples: 521681/524288, valid_rays: 2038/2048, loss: 2.1199493, rgb_loss: 0.2537031, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 0.5657594, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 1.2895830, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.0079874,
[extract_mesh()] query_pts:torch.Size([54872, 3]), valid:54872
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(3960, 3), F:(7908, 3)
[train()] train progress 600/1001
[train()] train progress 700/1001
[train()] train progress 800/1001
[train()] train progress 900/1001
[train()] train progress 1000/1001
Saved checkpoints at demo_data/ycb_video/ref_views_16/ob_0000021/nerf/model_latest.pth
[train_loop()] Iter: 1000, valid_samples: 522479/524288, valid_rays: 2041/2048, loss: 1.9102350, rgb_loss: 0.1458945, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 0.4992477, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 1.2583539, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.0047271,
[extract_mesh()] query_pts:torch.Size([54872, 3]), valid:54872
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(3992, 3), F:(7976, 3)
[extract_mesh()] query_pts:torch.Size([54872, 3]), valid:54872
[extract_mesh()] Running Marching Cubes
[extract_mesh()] done V:(3992, 3), F:(7976, 3)
[mesh_texture_from_train_images()] Texture: Texture map computation
project train_images 0/16 project train_images 1/16 project train_images 2/16 project train_images 3/16 project train_images 4/16 project train_images 5/16 project train_images 6/16 project train_images 7/16 project train_images 8/16 project train_images 9/16 project train_images 10/16 project train_images 11/16 project train_images 12/16 project train_images 13/16 project train_images 14/16 project train_images 15/16
/home/cqy/python/FoundationPose/bundlesdf/nerf_runner.py:1227: RuntimeWarning: invalid value encountered in cast
tex_image = np.clip(tex_image,0,255).astype(np.uint8)
free(): invalid pointer
Aborted (core dumped)

So how can I fix this error?
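Side note: the imageio DeprecationWarning near the top of the log states its own fix. A minimal sketch of what that change would look like around run_nerf.py:61 (the color_file value below is a made-up placeholder; run_nerf.py gets it from the dataset):

```python
# Pin the v2 API so imageio.imread keeps its current behavior and the
# DeprecationWarning goes away, exactly as the warning message suggests.
import imageio.v2 as imageio

color_file = "rgb/000000.png"  # hypothetical path, for illustration only
rgb = imageio.imread(color_file)
```

This only silences the deprecation notice; it is unrelated to the free(): invalid pointer abort.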
@AACMO @qian26 Hi, I figured this out: it's fine, the result is still saved. See #63.
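For anyone who wants to confirm that on their own run, here is a quick sanity check, assuming the output folder is the nerf/ directory printed in the log. The checkpoint name comes from the "Saved checkpoints at ..." line; everything else is just inspecting the folder, since mesh/texture filenames can vary between versions.

```python
import os
import torch

# Path as printed in the log above; adjust to your own run.
nerf_dir = "demo_data/ycb_video/ref_views_16/ob_0000021/nerf"

# The checkpoint is the file the script reports saving.
ckpt = torch.load(os.path.join(nerf_dir, "model_latest.pth"), map_location="cpu")
if isinstance(ckpt, dict):
    print("checkpoint entries:", list(ckpt.keys())[:5], "...")

# List everything else the run wrote.
for name in sorted(os.listdir(nerf_dir)):
    print(name, os.path.getsize(os.path.join(nerf_dir, name)), "bytes")
```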