
backward_backward_input_impl: not implemented error

Open · WBS111 opened this issue 2 years ago · 1 comment

When I use tiny-cuda-nn to train a network, I need to take the derivative of one of the network's intermediate outputs with respect to its input. When I then call loss.backward() on the loss, it reports an error: RuntimeError: DifferentiableObject::backward_backward_input_impl: not implemented error

The code is as follows:

import math
import torch
import tinycudann as tcnn

class Network(torch.nn.Module):
  def __init__(self, num_dim=3, base_resolution=16, max_resolution=4096, feat_dim=15,
               n_levels=16, log2_hashmap_size=19):
    # n_levels and log2_hashmap_size were undefined in the original snippet;
    # typical values are shown here as placeholders.
    super().__init__()
    self.feat_dim = feat_dim
    # Per-level growth factor derived from the coarsest and finest grid resolutions.
    per_level_scale = math.exp(math.log(max_resolution / base_resolution) / (n_levels - 1))
    self.mlp_base = tcnn.NetworkWithInputEncoding(
      n_input_dims=num_dim,
      n_output_dims=1,
      encoding_config={
        "otype": "HashGrid",
        "n_levels": n_levels,
        "n_features_per_level": 2,
        "log2_hashmap_size": log2_hashmap_size,
        "base_resolution": base_resolution,
        "per_level_scale": per_level_scale,
      },
      network_config={
        "otype": "FullyFusedMLP",
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,
        "n_hidden_layers": 1,
      },
    )
  def forward(self, input_xyz, input_dir=None, input_sun_dir=None):
    with torch.enable_grad():
      input_xyz.requires_grad_(True)
      sigma = self.mlp_base(input_xyz)

      # Normal map: gradient of sigma w.r.t. the input positions.
      # create_graph=True keeps this gradient differentiable so it can enter the loss.
      normal_map = torch.autograd.grad(
        outputs=sigma,
        inputs=input_xyz,
        grad_outputs=torch.ones_like(sigma, requires_grad=False),
        retain_graph=True,
        create_graph=True,
      )[0]

    ......

In forward I compute normal_map = torch.autograd.grad(outputs=sigma, inputs=input_xyz, grad_outputs=torch.ones_like(sigma, requires_grad=False), retain_graph=True, create_graph=True)[0]. When I then call loss.backward(), it reports this error:

Traceback (most recent call last):
  File "/home/user/Desktop/Nerf_Project/RS_Nerf/train_rsnerf_ngp.py", line 154, in <module>
    loss.backward()
  File "/home/user/anaconda3/lib/python3.9/site-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "/home/user/anaconda3/lib/python3.9/site-packages/torch/autograd/__init__.py", line 197, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/home/user/anaconda3/lib/python3.9/site-packages/torch/autograd/function.py", line 267, in apply
    return user_fn(self, *args)
  File "/home/user/anaconda3/lib/python3.9/site-packages/tinycudann/modules.py", line 136, in backward
    doutput_grad, params_grad, input_grad = ctx.ctx_fwd.native_tcnn_module.bwd_bwd_input(
RuntimeError: DifferentiableObject::backward_backward_input_impl: not implemented error
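
The error is triggered by the second-order (backward-of-backward) pass: create_graph=True records the torch.autograd.grad call in the autograd graph, so once the loss depends on normal_map, loss.backward() has to differentiate through tcnn's own backward, which calls bwd_bwd_input; NetworkWithInputEncoding does not implement it. A minimal sketch of the failing pattern (the sizes and config values below are illustrative, not taken from this issue):

import torch
import tinycudann as tcnn

# Illustrative stand-in for the issue's setup: hash-grid encoding fused with a FullyFusedMLP.
model = tcnn.NetworkWithInputEncoding(
    n_input_dims=3,
    n_output_dims=1,
    encoding_config={
        "otype": "HashGrid",
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
        "per_level_scale": 1.5,
    },
    network_config={
        "otype": "FullyFusedMLP",
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,
        "n_hidden_layers": 1,
    },
)

x = torch.rand(128, 3, device="cuda", requires_grad=True)
sigma = model(x)

# First-order gradient of sigma w.r.t. the input; create_graph=True makes it differentiable.
normal = torch.autograd.grad(sigma.sum(), x, create_graph=True)[0]

# Any loss that depends on `normal` forces a second-order pass through the tcnn module;
# that path calls backward_backward_input and raises the RuntimeError above.
normal.norm(dim=-1).mean().backward()

If the final backward() is dropped, the script runs, since only first-order gradients are then required.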

I then replaced self.mlp_base with a simple PyTorch network, as below:

self.mlp_base = torch.nn.Sequential(
    torch.nn.Linear(num_dim, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1 + self.feat_dim),
)

With this replacement, the code no longer reports the error above. How can I fix this while still using the tiny-cuda-nn network?
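
For context, one way to avoid the second-order pass through tcnn entirely (a sketch, not from this thread; finite_difference_grad and eps are illustrative) is to estimate the normals with central finite differences, so that loss.backward() only ever needs tcnn's first-order backward:

import torch

def finite_difference_grad(mlp_base, x, eps=1e-3):
    # d sigma / d x_i  ≈  (sigma(x + eps * e_i) - sigma(x - eps * e_i)) / (2 * eps)
    grads = []
    for i in range(x.shape[-1]):
        offset = torch.zeros_like(x)
        offset[..., i] = eps
        grads.append((mlp_base(x + offset) - mlp_base(x - offset)) / (2 * eps))
    return torch.cat(grads, dim=-1)  # same shape as x, built only from extra forward passes

# hypothetical drop-in for the torch.autograd.grad call above:
# normal_map = finite_difference_grad(self.mlp_base, input_xyz)

This trades exact analytic normals for numerical ones and costs 2 * num_dim extra forward passes per batch, but the resulting graph contains only first-order tcnn operations, so loss.backward() does not hit bwd_bwd_input.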

WBS111 · Aug 28, 2023

This has the same cause as https://github.com/NVlabs/tiny-cuda-nn/issues/89

ljjTYJR · Aug 28, 2023
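
For completeness, a related arrangement keeps the hash grid on the tcnn side while running the MLP in native PyTorch, similar to the torch.nn.Sequential replacement above. This is only a sketch: it assumes the installed tiny-cuda-nn build implements second-order input gradients for the grid encoding (discussed in the linked issue), which the fused network path does not.

import torch
import tinycudann as tcnn

encoding = tcnn.Encoding(
    n_input_dims=3,
    encoding_config={
        "otype": "HashGrid",
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
        "per_level_scale": 1.5,
    },
)
mlp = torch.nn.Sequential(
    torch.nn.Linear(encoding.n_output_dims, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
).cuda()

x = torch.rand(128, 3, device="cuda", requires_grad=True)
sigma = mlp(encoding(x).float())  # tcnn encodings typically return half precision
normal = torch.autograd.grad(sigma.sum(), x, create_graph=True)[0]
normal.norm(dim=-1).mean().backward()  # succeeds only if the encoding supports double backward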