
Legacy autograd function with non-static forward method is deprecated

Open MarkWijkhuizen opened this issue 3 years ago • 1 comments

When running the main method I encounter the following error. PyTorch 1.5.0 and Torchvision 0.6.0 are installed with CUDA support. Do you have any suggestions on where to find the problem?

RuntimeError                              Traceback (most recent call last)
<ipython-input-7-e2b31c66afa6> in <module>
    170 
    171 if __name__ == '__main__':
--> 172     model = main()

<ipython-input-7-e2b31c66afa6> in main(DEBUG)
    153 
    154         # train for one epoch
--> 155         train(train_loader, model, criterion, optimizer, epoch, log_training)
    156 
    157         # evaluate on validation set

<ipython-input-4-451b537ed2a3> in train(train_loader, model, criterion, optimizer, epoch, log)
     24 
     25         # compute output
---> 26         output = model(input_var)
     27         loss = criterion(output, target_var)
     28 

~\Anaconda3\envs\mff\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

~\Anaconda3\envs\mff\lib\site-packages\torch\nn\parallel\data_parallel.py in forward(self, *inputs, **kwargs)
    151         inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
    152         if len(self.device_ids) == 1:
--> 153             return self.module(*inputs[0], **kwargs[0])
    154         replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
    155         outputs = self.parallel_apply(replicas, inputs, kwargs)

~\Anaconda3\envs\mff\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

D:\MEGA\Nijmegen\Master Stage\notebooks\MFF\models.py in forward(self, input)
    246             base_out = base_out.view((-1, self.num_segments) + base_out.size()[1:])
    247 
--> 248         output = self.consensus(base_out)
    249         return output.squeeze(1)
    250 

~\Anaconda3\envs\mff\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

D:\MEGA\Nijmegen\Master Stage\notebooks\MFF\ops\basic_ops.py in forward(self, input)
     45 
     46     def forward(self, input):
---> 47         return SegmentConsensus(self.consensus_type, self.dim)(input)

~\Anaconda3\envs\mff\lib\site-packages\torch\autograd\function.py in __call__(self, *args, **kwargs)
    142 
    143     def __call__(self, *args, **kwargs):
--> 144         raise RuntimeError(
    145             "Legacy autograd function with non-static forward method is deprecated. "
    146             "Please use new-style autograd function with static forward method. "

RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)
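The error message itself points to the underlying cause: SegmentConsensus is written as an old-style autograd function that is instantiated and called, which PyTorch 1.5 no longer allows. A minimal sketch of the new-style replacement is below; the averaging behavior is an assumption about what the repo's 'avg' consensus type computes, and the class name AvgConsensus is hypothetical.

```python
import torch

# Sketch of a new-style autograd function with static forward/backward,
# as the error message suggests. Averaging over the segment dimension is
# an assumption about SegmentConsensus's 'avg' mode; the name is made up.
class AvgConsensus(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, dim):
        ctx.dim = dim
        ctx.shape = input.size()
        return input.mean(dim=dim, keepdim=True)

    @staticmethod
    def backward(ctx, grad_output):
        # Spread the incoming gradient evenly over the averaged dimension.
        grad_in = grad_output.expand(*ctx.shape) / ctx.shape[ctx.dim]
        return grad_in, None  # no gradient for the 'dim' argument

x = torch.randn(2, 3, 4, requires_grad=True)
y = AvgConsensus.apply(x, 1)  # invoked via .apply, never by instantiating
y.sum().backward()
```

Note the call site changes as well: new-style functions are used through AvgConsensus.apply(...) rather than SegmentConsensus(...)(input).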

MarkWijkhuizen, Mar 30 '21 14:03

So I figured out how to get the code working! Since the error message was about deprecated functionality, downgrading to an earlier PyTorch version seemed a logical step, and it worked.

Installing PyTorch v1.4 did the trick; download instructions can be found here.
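For reproducibility, one possible way to pin a matching environment is shown below. The torchvision version is an assumption based on the release paired with torch 1.4.0; the official install instructions remain authoritative, especially for the CUDA build.

```shell
# Pin the downgraded versions (torchvision 0.5.0 is the release that
# shipped alongside torch 1.4.0; adjust the CUDA variant as needed).
pip install torch==1.4.0 torchvision==0.5.0
```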

There is, however, still a bug in the evaluation function: the model's forward pass runs outside the torch.no_grad() scope, so every evaluation step builds and retains an autograd graph, eventually resulting in an OOM error. This is easily fixed by moving the model call inside the with torch.no_grad(): scope, i.e. simply adding one level of indentation to line 237, as shown below.

# line 232-237 in main.py
with torch.no_grad():
    input_var = Variable(input)
    target_var = Variable(target)

    # compute output
    output = model(input_var) # This call should be inside the torch.no_grad() scope!!!
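The effect of that one-line move can be seen directly on a toy model: outside torch.no_grad() the output carries an autograd graph that stays alive, while inside it no graph is built and the memory is released immediately. This is a standalone sketch, not code from the repo.

```python
import torch

# Minimal demonstration of why the evaluation forward pass belongs
# inside torch.no_grad(): outside it, each call records an autograd
# graph that is kept alive, growing memory use across eval steps.
model = torch.nn.Linear(8, 2)
x = torch.randn(4, 8)

out_train = model(x)
print(out_train.requires_grad)   # True: graph retained

with torch.no_grad():
    out_eval = model(x)
print(out_eval.requires_grad)    # False: no graph is built
```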

MarkWijkhuizen, Apr 16 '21 13:04