
Calling `spspmm` twice gives `CUDA error: an illegal memory access was encountered`

Open patmjen opened this issue 4 years ago • 11 comments

Summary

Running `spspmm` twice with the same inputs gives `RuntimeError: CUDA error: an illegal memory access was encountered`.

The following snippet shows the issue for me:

import torch
from torch_sparse import spspmm

# device = torch.device('cpu')  # This works
device = torch.device('cuda')  # This will error

# Make two simple sparse matrices
A_idx = torch.tensor([[0, 1], [0, 1]])
A_val = torch.tensor([1, 1]).float()

B_idx = torch.tensor([[0, 0, 1], [0, 1, 1]])
B_val = torch.tensor([2, 3, 4]).float()

# Transfer to device
print(f'To {device}')
A_idx = A_idx.to(device)
A_val = A_val.to(device)
B_idx = B_idx.to(device)
B_val = B_val.to(device)

# Do matrix multiplies
print('spspmm 1')
spspmm(A_idx, A_val, B_idx, B_val, 2, 2, 2, coalesced=True)  # This works
print('spspmm 2')
spspmm(A_idx, A_val, B_idx, B_val, 2, 2, 2, coalesced=True)  # On CUDA, this errors

When I run the above code, I get the following error:

To cuda
spspmm 1
spspmm 2
Traceback (most recent call last):
  File "sparsebug.py", line 25, in <module>
    spspmm(A_idx, A_val, B_idx, B_val, 2, 2, 2, )  # On CUDA, this errors
  File "venv/lib/python3.8/site-packages/torch_sparse/spspmm.py", line 30, in spspmm
    C = matmul(A, B)
  File "venv/lib/python3.8/site-packages/torch_sparse/matmul.py", line 139, in matmul
    return spspmm(src, other, reduce)
  File "venv/lib/python3.8/site-packages/torch_sparse/matmul.py", line 116, in spspmm
    return spspmm_sum(src, other)
  File "venv/lib/python3.8/site-packages/torch_sparse/matmul.py", line 101, in spspmm_sum
    rowptrC, colC, valueC = torch.ops.torch_sparse.spspmm_sum(
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Sorry if this is just me using the library wrongly! Is there something I should be doing in between calls to `spspmm`, or any other way to fix it?
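
In the meantime, one fallback I could imagine trying (untested, and only viable here because the matrices are tiny) is PyTorch's built-in sparse x dense multiply instead of `spspmm`, reusing the tensors from the snippet above:

# Untested fallback sketch: multiply with torch.sparse.mm (sparse x dense) instead
# of torch_sparse.spspmm. Only reasonable for small matrices, since B is densified.
A = torch.sparse_coo_tensor(A_idx, A_val, (2, 2)).coalesce()
B = torch.sparse_coo_tensor(B_idx, B_val, (2, 2)).coalesce()

C_dense = torch.sparse.mm(A, B.to_dense())   # dense (2, 2) result
C = C_dense.to_sparse().coalesce()           # back to COO if a sparse result is needed
C_idx, C_val = C.indices(), C.values()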

Environment

$ python collect_env.py
Collecting environment information...
PyTorch version: 1.9.0+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A

OS: Scientific Linux release 7.7 (Nitrogen) (x86_64)
GCC version: (GCC) 8.3.0
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.17

Python version: 3.8.4 (default, Jul 16 2020, 09:01:13)  [GCC 8.4.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.2.5
Is CUDA available: True
CUDA runtime version: 11.1.74
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB

Nvidia driver version: 470.42.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] pytorch3d==0.5.0
[pip3] torch==1.9.0+cu111
[pip3] torch-scatter==2.0.8
[pip3] torch-sparse==0.6.12
[pip3] torchvision==0.10.0+cu111
[conda] Could not collect

patmjen • Sep 22 '21

Weird, it works for me, using CUDA 11.1. Does running with CUDA_LAUNCH_BLOCKING=1 give you a more reasonable error message? Is it possible for you to determine which call in spspmm_cuda.cu accesses illegal memory?
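
Just to clarify what I mean: the variable has to be set before the CUDA context is created, so either prefix the command (`CUDA_LAUNCH_BLOCKING=1 python sparsebug.py`) or, as a rough sketch, set it at the very top of the script:

# Rough sketch: enable blocking kernel launches from inside the script.
# This must happen before any CUDA work, so set it before touching the GPU.
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'

import torch  # only import/use CUDA after the variable is set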

rusty1s • Sep 23 '21

Unfortunately no, running with `CUDA_LAUNCH_BLOCKING=1` does not change the error (except that it no longer suggests passing `CUDA_LAUNCH_BLOCKING=1`).

Is there a way I could determine which call accesses illegal memory without recompiling? I suspect not, but there's no harm in asking.

What graphics card are you using? I once had to deal with a bug that only showed up on newer cards (despite using the same CUDA version) because they had changed how certain illegal operations were handled: on the old cards the illegal operation was silently ignored (so I never noticed it), but on the newer ones it surfaced as an error. Maybe it's something similar here?
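
The closest I can think of without recompiling is to replicate what `spspmm` appears to do internally (per the traceback: build `SparseTensor`s and call `matmul`), with a synchronize after each step so the asynchronous error at least gets attributed to the right Python-level call. A rough, untested sketch, reusing the tensors from my repro above:

# Rough, untested sketch: mimic the spspmm path from the traceback
# (SparseTensor construction followed by matmul) and synchronize after each
# step so the illegal access is reported at the call that triggers it.
from torch_sparse import SparseTensor, matmul

A = SparseTensor(row=A_idx[0], col=A_idx[1], value=A_val, sparse_sizes=(2, 2))
B = SparseTensor(row=B_idx[0], col=B_idx[1], value=B_val, sparse_sizes=(2, 2))
torch.cuda.synchronize()

for i in (1, 2):
    C = matmul(A, B)          # same spspmm_sum path as in the traceback
    torch.cuda.synchronize()  # force the error to surface on the failing call
    print(f'matmul {i} ok')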

patmjen • Sep 23 '21

I think you would have to re-compile to do further debugging. I have tested it on a 1080 Ti, a 2080 Ti, and a Titan RTX, and it works fine on all of them.

@JiaxuanYou, @RexYing: Can you also check if you can reproduce this issue?

rusty1s • Sep 24 '21

I also just tested it on an NVIDIA GeForce RTX 2070 Super card on my Windows 10 machine. Here, the bug does not show up. So maybe it is dependent on the card.

Unfortunately, I don't have time to do further debugging in the near future. Sorry! I know this makes it hard to proceed, so if you want you can close the issue.

patmjen • Sep 24 '21

Thanks for reporting. I'm still leaving this issue open. If someone else has the same problem and is willing to debug, we can hopefully fix this.

rusty1s • Sep 27 '21

Is anybody still working on this? I ran into the same issue while deploying Graph-UNET, which relies on `spspmm`. I could perhaps try to debug it.

thijssnelleman • Apr 04 '22

It would be of much help if you could try to debug :)

rusty1s • Apr 04 '22

I ran into the same error. Can anyone address this issue?

daeunni • Apr 30 '22

Does this mean that https://github.com/rusty1s/pytorch_sparse/issues/228 is resolved for you?

rusty1s • May 02 '22

@thijssnelleman how did you solve the issue?

andreimargeloiu • Dec 18 '22

I believe I replaced the layer that made use of this function with another layer. Not much of a solution, but it worked in my situation.

thijssnelleman • Dec 19 '22