
Error during backpropagation

Open · NyquistBodeTu opened this issue 10 months ago · 1 comment

File "/home/user/anaconda3/envs/torch_py38/lib/python3.8/site-packages/adaPool-0.1-py3.8-linux-x86_64.egg/adaPool/idea.py", line 206, in backward adapool_cuda.backward_1d_em(*saved) RuntimeError: input.is_contiguous()INTERNAL ASSERT FAILED at "CUDA/adapool_cuda.cpp":348, please report a bug to PyTorch. input must be a contiguous tensor

— NyquistBodeTu, Apr 22 '24 13:04

The operations you apply after the pooling block leave the gradients stored in non-contiguous memory. You can avoid this by calling `.contiguous()` on the tensors before/after the layer call in the forward/backward passes, as sketched below. Wrapping the computation in `torch.autograd.detect_anomaly` will also help you pinpoint which operation produced the problematic gradient.
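A minimal sketch of that advice, assuming the package exposes an `AdaPool1d` layer (the import path and constructor arguments here are placeholders, not taken from the issue; check your installed version's signature). It forces the input contiguous before the layer, forces the incoming gradient contiguous with a backward hook, and runs everything under `detect_anomaly` so a backward failure points back at the offending forward op:

```python
import torch
from adaPool import AdaPool1d  # assumed import path; adjust to your install

# Hypothetical layer construction -- kernel_size/stride/beta are
# placeholder arguments, not verbatim from the adaPool API.
pool = AdaPool1d(kernel_size=2, stride=2, beta=(1,)).cuda()

x = torch.randn(4, 8, 16, device="cuda", requires_grad=True)

# detect_anomaly records forward traces, so errors raised in backward
# report which forward op produced the offending gradient.
with torch.autograd.detect_anomaly():
    # Make the input contiguous before the CUDA kernel sees it.
    y = pool(x.contiguous())
    # Make the gradient flowing back into the layer contiguous too.
    y.register_hook(lambda grad: grad.contiguous())
    y.sum().backward()
```

If the assert still fires, the hook confirms whether the non-contiguous tensor is the gradient (backward path) or the activation itself (forward path).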

— alexandrosstergiou, Apr 22 '24 15:04