Kanza

39 comments by Kanza

Thank you for your reply. I have gone through this documentation, but I still don't see how to fix it. However, the code is here: `if method == 'lloss': models...

This is the training part `def train(models, method, criterion, optimizers, schedulers, dataloaders, num_epochs, epoch_loss): print('>> Train a Model.') best_acc = 0. for epoch in range(num_epochs): best_loss = torch.tensor([0.5]).cuda() loss =...

Thank you so much for your help. I wrote something like this `def train_epoch(models, method, criterion, optimizers, dataloaders, epoch, epoch_loss): models['backbone'].train() if method == 'lloss': models['module'].train() global iters for data...

` criterion(models['backbone'](inputs)[0], labels).backward() File "/home/kanza/anaconda3/envs/optuna/lib/python3.8/site-packages/torch/_tensor.py", line 363, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/home/kanza/anaconda3/envs/optuna/lib/python3.8/site-packages/torch/autograd/__init__.py", line 166, in backward grad_tensors_ = _make_grads(tensors, grad_tensors_, is_grads_batched=False) File "/home/kanza/anaconda3/envs/optuna/lib/python3.8/site-packages/torch/autograd/__init__.py", line 67, in...

No, actually this repo uses multiple methods such as Random or 'lloss'. I have removed that method's module for the sake of simplicity; now can you suggest...

Thank you so much for your help and recommendations. I cannot thank you enough. I fixed the error by adding `loss.backward()`: ` # -----------------SAM Optimizer ------------------- loss.backward() criterion(models['backbone'](inputs)[0], labels) optimizers['backbone'].first_step(zero_grad=True)...

When I call `loss.backward()` twice, it gives me the following error: RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been...
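For context on this error: PyTorch frees the autograd graph after the first `backward()` call, so calling `backward()` a second time on the same loss tensor fails. A minimal, self-contained sketch (toy tensors, not from the repo) showing the failure and the usual fix of recomputing the loss with a fresh forward pass:

```python
import torch

# Toy example: names here are illustrative, not from the original repo.
w = torch.randn(3, requires_grad=True)
x = torch.randn(3)

loss = (w * x).sum()
loss.backward()        # first backward; the graph is freed afterwards
# loss.backward()      # calling it again on the same tensor raises:
                       # RuntimeError: Trying to backward through the graph a second time

# The usual fix is a fresh forward pass, not retain_graph=True:
loss = (w * x).sum()   # rebuilds the graph
loss.backward()        # works
```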

None of them is working: ` # -----------------SAM Optimizer ------------------- criterion(models['backbone'](inputs)[0], labels) loss.backward(retain_graph=True) optimizers['backbone'].first_step(zero_grad=True) criterion(models['backbone'](inputs)[0], labels) loss.backward(retain_graph=True) optimizers['backbone'].second_step(zero_grad=True) # -----------------SAM Optimizer for LLOSS Method ------------------- if method == 'lloss': #optimizers['module'].step()...

As per the sample given here (https://github.com/davda54/sam), `loss.backward()` is not required for `second_step`. The first one is working fine for me, but it is not working for my LLOSS method.
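For reference, here is a minimal, self-contained version of the two-step pattern from the linked repo's README; the toy model, data, and hyperparameters below are only there to make the sketch runnable and are not from the original code. Note that the README example does call `.backward()` on a freshly computed loss before `second_step`:

```python
import torch
import torch.nn as nn
from sam import SAM  # https://github.com/davda54/sam

# Toy stand-ins for models['backbone'], criterion, and optimizers['backbone'].
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = SAM(model.parameters(), torch.optim.SGD, lr=0.1, momentum=0.9)

inputs = torch.randn(4, 10)
labels = torch.randint(0, 2, (4,))

# First forward-backward pass
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.first_step(zero_grad=True)

# Second forward-backward pass: recompute the loss on a fresh graph,
# then backward again before second_step (no retain_graph needed).
criterion(model(inputs), labels).backward()
optimizer.second_step(zero_grad=True)
```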

Okay, right, but there are some errors when using `backward()` the second time. I don't know how to resolve them: raise RuntimeError("grad can be implicitly created only for scalar outputs") RuntimeError:...
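This particular RuntimeError means the tensor passed to `backward()` is not a scalar. That typically happens when the criterion is built with `reduction='none'` (as learning-loss style code often does, to feed per-sample losses to the loss-prediction module), so the loss has to be reduced to a scalar before calling `backward()`. A small sketch under that assumption:

```python
import torch
import torch.nn as nn

# Assumption: the repo's criterion is created with reduction='none',
# so it returns one loss value per sample instead of a scalar.
criterion = nn.CrossEntropyLoss(reduction='none')

logits = torch.randn(4, 10, requires_grad=True)
labels = torch.randint(0, 10, (4,))

per_sample_loss = criterion(logits, labels)   # shape (4,), not a scalar
# per_sample_loss.backward()                  # RuntimeError: grad can be implicitly
                                              # created only for scalar outputs

per_sample_loss.mean().backward()             # reduce to a scalar first, then backward works
```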