
Issues using torch.autograd.grad in the tiny-cuda-nn environment

Open joejep opened this issue 2 weeks ago • 0 comments

```python
gradients = torch.autograd.grad(
    outputs=potential.sum(),
    inputs=flat_points,
    create_graph=create_graph,
    retain_graph=True,
    only_inputs=True,
    allow_unused=True,
)[0]
```
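To make the failure reproducible, here is roughly the context in which that call runs. The names `flat_points` and `potential` and the config values are simplified placeholders, not my exact model:

```python
import torch
import tinycudann as tcnn

# Placeholder config that mirrors (but simplifies) the one further down.
network = tcnn.Network(
    n_input_dims=3,
    n_output_dims=1,
    network_config={
        "otype": "FullyFusedMLP",
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,
        "n_hidden_layers": 8,
    },
)

flat_points = torch.rand(1024, 3, device="cuda", requires_grad=True)
potential = network(flat_points)

# This is where the error occurs once create_graph=True:
gradients = torch.autograd.grad(
    outputs=potential.sum(),
    inputs=flat_points,
    create_graph=True,
    retain_graph=True,
    only_inputs=True,
    allow_unused=True,
)[0]
```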

Is there a way out? I am forced to fall back on the traditional architecture (a plain PyTorch MLP):

```python
self.block1 = nn.Sequential(
    nn.Linear(input_dim, hidden_dim),
    nn.ReLU(),
)
```

instead of the tiny-cuda-nn version:

```python
network_config = {
    "otype": "FullyFusedMLP",
    "activation": "ReLU",
    "output_activation": "Sigmoid",  # final activation for RGB
    "n_neurons": hidden_dim,
    "n_hidden_layers": 8,  # combined layers from all blocks
}

self.network = tcnn.Network(
    n_input_dims=total_input_dim,
    n_output_dims=3,  # RGB output
    network_config=network_config,
)
```
I get an error with tiny-cuda-nn when torch.autograd.grad tries to create the graph, but no error with the nn.Sequential version. Staying with nn.Sequential is not an option for me because I need to speed up training. Any help would be appreciated.
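Would replacing the analytic gradient with central finite differences sidestep create_graph entirely? Each difference term is an ordinary forward pass, so the loss built from it should only need first-order backward through the tcnn network. A rough sketch of what I mean (`eps` is a placeholder step size, and the helper assumes `network` maps (N, d) -> (N, 1)):

```python
import torch

def numerical_gradient(network, points, eps=1e-3):
    # Central differences per input dimension.
    grads = []
    for i in range(points.shape[1]):
        offset = torch.zeros_like(points)
        offset[:, i] = eps
        # .float() because tcnn outputs are typically float16, which is
        # noisy for small differences.
        delta = network(points + offset).float() - network(points - offset).float()
        grads.append(delta / (2 * eps))
    return torch.cat(grads, dim=-1)  # (N, points.shape[1])
```

If there is a proper way to get second-order gradients out of FullyFusedMLP instead, that would be even better.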

joejep · Nov 30 '25 11:11