adaPool
Error during backpropagation
File "/home/user/anaconda3/envs/torch_py38/lib/python3.8/site-packages/adaPool-0.1-py3.8-linux-x86_64.egg/adaPool/idea.py", line 206, in backward
    adapool_cuda.backward_1d_em(*saved)
RuntimeError: input.is_contiguous() INTERNAL ASSERT FAILED at "CUDA/adapool_cuda.cpp":348, please report a bug to PyTorch. input must be a contiguous tensor
The operations you perform after the pooling block cause the gradients to be stored in non-contiguous memory. You can avoid this by calling .contiguous()
on the tensors immediately before and after the layer call in the forward pass. Enabling torch.autograd.detect_anomaly
will also help you trace the error back to the offending operation; see the sketch below.
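A minimal sketch of this workaround, assuming `pool` stands in for the adaPool layer that triggered the assert (here replaced by a regular `AvgPool1d` so the snippet runs on its own) and that the rest of the model is omitted:

import torch

def forward_with_contiguous(pool, x):
    # Force the pooling input into contiguous memory before the custom CUDA kernel runs.
    x = x.contiguous()
    y = pool(x)
    # Make the output contiguous as well, so tensors saved for the backward pass
    # are laid out contiguously when the backward kernel reads them.
    return y.contiguous()

# Anomaly detection re-runs the backward with extra checks and reports which
# forward operation produced the failing gradient, which helps locate the
# non-contiguous tensor.
with torch.autograd.detect_anomaly():
    x = torch.randn(8, 16, 32, device="cuda", requires_grad=True)
    pool = torch.nn.AvgPool1d(2)  # hypothetical stand-in for the adaPool layer
    out = forward_with_contiguous(pool, x)
    out.sum().backward()

The .contiguous() calls are cheap no-ops when the tensor is already contiguous, so leaving them in permanently costs little.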