
Increase the resolution of the grid

Open · fresh-men opened this issue 2 years ago · 2 comments

Hello, I want to increase the resolution of the grid so that more detailed objects can be represented, but I have run into a problem. How can I solve it? Have you ever tried increasing the grid resolution?

I tried changing pg_scale = [1000, 2000, 4000] to pg_scale = [1000, 2000, 3000, 4000], and then encountered this error:

Traceback (most recent call last):
  File "/opt/data/private/PAC-NeRF-main/train.py", line 287, in <module>
    train_static(cfg, pnerf, optimizer, start, cfg['N_static'], rays_o_all, rays_d_all, viewdirs_all, rgb_all, ray_mask_all)
  File "/opt/data/private/PAC-NeRF-main/train.py", line 163, in train_static
    global_loss = pnerf.forward(1, rays_o_all,
  File "/opt/data/private/PAC-NeRF-main/lib/pac_nerf.py", line 204, in forward
    self.dynamic_observer.initialize(self.init_particles, self.init_features, self.init_velocities, self.init_rhos, self.init_mu, self.init_lam, self.nerf.voxel_size, self.init_yield_stress, self.init_plastic_viscosity, self.init_friction_alpha, self.cohesion)
  File "/opt/data/private/PAC-NeRF-main/lib/engine/dynamic_observer.py", line 160, in initialize
    self.from_torch(particles.data.cpu().numpy(), features.data.cpu().numpy(), velocities.data.cpu().numpy(), particle_rho.data.cpu().numpy(), particle_mu.data.cpu().numpy(), particle_lam.data.cpu().numpy())
  File "/root/miniconda3/envs/pacnerf/lib/python3.9/site-packages/taichi/lang/kernel_impl.py", line 1002, in __call__
    return self._primal(self._kernel_owner, *args, **kwargs)
  File "/root/miniconda3/envs/pacnerf/lib/python3.9/site-packages/taichi/lang/kernel_impl.py", line 869, in __call__
    return self.runtime.compiled_functions[key](*args)
  File "/root/miniconda3/envs/pacnerf/lib/python3.9/site-packages/taichi/lang/kernel_impl.py", line 785, in func__
    raise e from None
  File "/root/miniconda3/envs/pacnerf/lib/python3.9/site-packages/taichi/lang/kernel_impl.py", line 782, in func__
    t_kernel(launch_ctx)
RuntimeError: [cuda_driver.h:operator()@87] CUDA Error CUDA_ERROR_ASSERT: device-side assert triggered while calling stream_synchronize (cuStreamSynchronize)
[E 04/07/23 02:37:09.064 434] [cuda_driver.h:operator()@87] CUDA Error CUDA_ERROR_ASSERT: device-side assert triggered while calling stream_synchronize (cuStreamSynchronize)
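For reference, my understanding (which may be wrong) is that each entry in pg_scale marks an iteration where the voxel grid is upscaled, doubling the total voxel count, so the extra entry should roughly double the final grid size. A back-of-envelope sketch with made-up base numbers, not values from the repo:

```python
# Rough estimate only; assumes each pg_scale milestone doubles the total voxel
# count and the grid is roughly cubical. The base count below is made up.
def final_grid(num_voxels_base, pg_scale):
    num_voxels = num_voxels_base * 2 ** len(pg_scale)  # total voxels after all upscales
    per_axis = round(num_voxels ** (1.0 / 3.0))        # approximate per-axis resolution
    return num_voxels, per_axis

print(final_grid(100**3, [1000, 2000, 4000]))        # 8x the base count, ~200 per axis
print(final_grid(100**3, [1000, 2000, 3000, 4000]))  # 16x the base count, ~252 per axis
```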

fresh-men · Apr 07 '23 02:04

I'm not sure about this error.

The simulation part may be too slow if you double the resolution.
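If you want a more informative message than the CUDA device-side assert, one thing that usually helps with Taichi kernels is to rerun on the CPU backend with debug mode enabled, which turns on bounds checking and reports the failing access directly. A minimal sketch of the idea (not PAC-NeRF code; the call would go wherever Taichi is initialized in this repo):

```python
import taichi as ti

# debug=True enables bounds/assert checks; on ti.cpu the error message points
# at the offending kernel and index instead of a generic CUDA_ERROR_ASSERT
# raised later at stream_synchronize.
ti.init(arch=ti.cpu, debug=True)
```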

xuan-li · Apr 07 '23 02:04

The problem happens in this call:

self.from_torch(particles.data.cpu().numpy(), features.data.cpu().numpy(), velocities.data.cpu().numpy(), particle_rho.data.cpu().numpy(), particle_mu.data.cpu().numpy(), particle_lam.data.cpu().numpy())
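My guess (not verified) is that the higher-resolution grid produces more particles than the Taichi fields in dynamic_observer were allocated for, so the copy runs out of bounds and shows up as the device-side assert. A quick sanity check before that call could look roughly like this; the capacity argument is a placeholder, since I don't know how the fields are actually sized in dynamic_observer.py:

```python
def check_particle_capacity(particles, field_capacity):
    # `field_capacity` stands in for however many particles the Taichi fields
    # were allocated for in dynamic_observer.py -- I don't know the actual
    # attribute, so it is passed in explicitly here.
    n_in = particles.shape[0]
    print(f"particles passed in: {n_in}, field capacity: {field_capacity}")
    assert n_in <= field_capacity, "more particles than the Taichi fields can hold"
```

Calling this right before self.from_torch(...) with the real numbers would confirm or rule out that explanation.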


fresh-men · Apr 07 '23 02:04