
spspmm leads to error: PyTorch CUDA error: an illegal memory access was encountered.

GooLiang opened this issue 1 year ago • 27 comments

Hi, I'm having the same problem as #174. I have two large adjacency matrices; the details are as follows:

adj_l SparseTensor(row=tensor([ 0, 0, 0, ..., 736388, 736388, 736388], device='cuda:2'),
                   col=tensor([ 145, 2215, 3205, ..., 21458, 22283, 31934], device='cuda:2'),
                   val=tensor([0.0909, 0.0909, 0.0909, ..., 0.1000, 0.1000, 0.1000], device='cuda:2'),
                   size=(736389, 59965), nnz=7505078, density=0.02%)

adj_r SparseTensor(row=tensor([ 0, 0, 0, ..., 59962, 59963, 59964], device='cuda:2'),
                   col=tensor([222683, 370067, 430465, ..., 38176, 514545, 334613], device='cuda:2'),
                   val=tensor([0.1429, 0.1429, 0.1429, ..., 0.5000, 1.0000, 1.0000], device='cuda:2'),
                   size=(59965, 736389), nnz=7505078, density=0.02%)

I convert them to sparse (COO) format and use the following code:

rowA, colA, _ = adj_l.coo()
rowB, colB, _ = adj_r.coo()
indexA = torch.stack((rowA, colA))
indexB = torch.stack((rowB, colB))
valueA = adj_l.storage._value
valueB = adj_r.storage._value
indexC, valueC = spspmm(indexA, valueA, indexB, valueB,
                        adj_l.size(0), adj_l.size(1), adj_r.size(1),
                        coalesced=True)

Then the following error is reported:

CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Even with CUDA_LAUNCH_BLOCKING=1 there is no more information. I believe this is caused by the two sparse matrices requiring too much memory. Is there any way to run this on the GPU?

GooLiang avatar Feb 28 '23 12:02 GooLiang
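For anyone who wants to reproduce this without the original data, here is a minimal sketch (not code from the thread) that builds random SparseTensors with the reported shapes and nnz; the index patterns and values are arbitrary placeholders.

import torch
from torch_sparse import SparseTensor

def random_sparse(num_rows, num_cols, nnz, device='cuda:0'):
    # Random COO indices and values; only the shape and nnz match the report above.
    row = torch.randint(num_rows, (nnz,), device=device)
    col = torch.randint(num_cols, (nnz,), device=device)
    val = torch.rand(nnz, device=device)
    return SparseTensor(row=row, col=col, value=val, sparse_sizes=(num_rows, num_cols))

adj_l = random_sparse(736389, 59965, 7505078)
adj_r = random_sparse(59965, 736389, 7505078)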

What version of torch-sparse are you working on? 0.6.16 had some fixes to the spspmm routine, see https://github.com/rusty1s/pytorch_sparse/releases.

rusty1s avatar Mar 01 '23 08:03 rusty1s

Thank you for your reply. But my CUDA version is 10.1 and my torch version is 1.12. Can I solve this problem without upgrading?

GooLiang avatar Mar 11 '23 09:03 GooLiang

You mean without upgrading CUDA? You should be able to install from wheels via the pip -f ... command, while manual compilation will probably fail due to a CUDA version conflict. Let me know if I am missing something.

rusty1s avatar Mar 11 '23 10:03 rusty1s

I mean that if I want to install torch-sparse 0.6.16, I need the following dependency chain: torch 1.13 -> CUDA 11.6.
However, my CUDA only works with version 10.2 due to some utility restrictions, so I don't think I can install torch-sparse 0.6.16. Is there a way to support torch-sparse 0.6.16 under CUDA 10.1?

GooLiang avatar Mar 11 '23 10:03 GooLiang

The CUDA version needs to match the one installed by PyTorch, not necessarily your system CUDA.

rusty1s avatar Mar 11 '23 11:03 rusty1s
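A quick way to verify which CUDA version your PyTorch build ships with (a minimal sketch; the wheel URL in the comment follows the install pattern from the pytorch_sparse README and has to be adjusted to your own torch/CUDA combination):

import torch

print(torch.__version__)   # e.g. '1.13.1+cu116'
print(torch.version.cuda)  # the CUDA version PyTorch was built against, e.g. '11.6'

# Then install the matching pre-built wheel, e.g.:
#   pip install torch-sparse -f https://data.pyg.org/whl/torch-1.13.0+cu116.html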

I upgraded to torch-sparse 0.6.16; however, I got a new error when running the previous code. Is there any solution?

RuntimeError (note: full exception trace is shown but execution is paused at: _run_module_as_main)
CUDA error: insufficient resources when calling cusparseSpGEMM_workEstimation( handle, opA, opB, &alpha, matA, matB, &beta, matC, computeType, CUSPARSE_SPGEMM_DEFAULT, spgemmDesc, &bufferSize1, dBuffer1)
  File "/root/anaconda3/envs/lyx/lib/python3.7/site-packages/torch_sparse/matmul.py", line 96, in spspmm_sum
    C = torch.sparse.mm(A, B)
  File "/root/anaconda3/envs/lyx/lib/python3.7/site-packages/torch_sparse/matmul.py", line 120, in spspmm
    return spspmm_sum(src, other)
  File "/root/anaconda3/envs/lyx/lib/python3.7/site-packages/torch_sparse/matmul.py", line 143, in matmul
    return spspmm(src, other, reduce)
  File "/root/anaconda3/envs/lyx/lib/python3.7/site-packages/torch_sparse/matmul.py", line 151, in
    self, other, reduce)
  File "/home/public/lyx/Nars_ensemble/ogbn/data_ogbn.py", line 251, in hg_propagate_sparse_pyg_freebase
    new_adjs[name] = adj_l.matmul(adj_r.to(prop_device)).to(store_device)
  File "/home/public/lyx/Nars_ensemble/ogbn/train_ogbn.py", line 99, in main
    features_list_dict, extra_features_buffer = hg_propagate_sparse_pyg_freebase(adjs, threshold_metalen, tgt_type, args.num_hops, max_length, extra_metapath, prop_device, prop_feats=True, echo=True)
  File "/home/public/lyx/Nars_ensemble/ogbn/train_ogbn.py", line 261, in
    main(args)
  File "/root/anaconda3/envs/lyx/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/root/anaconda3/envs/lyx/lib/python3.7/runpy.py", line 193, in _run_module_as_main (Current frame)
    "main", mod_spec)
RuntimeError: CUDA error: insufficient resources when calling cusparseSpGEMM_workEstimation( handle, opA, opB, &alpha, matA, matB, &beta, matC, computeType, CUSPARSE_SPGEMM_DEFAULT, spgemmDesc, &bufferSize1, dBuffer1)

GooLiang avatar Mar 12 '23 01:03 GooLiang

What does torch_sparse.__version__ return? The new routine in 0.6.16 doesn't make any use of cusparse anymore, so this error is a bit confusing to me.

rusty1s avatar Mar 12 '23 08:03 rusty1s

torch_sparse.__version__ returns '0.6.16+pt113cu116'.

GooLiang avatar Mar 12 '23 08:03 GooLiang

Hi, do you have any idea about this problem? Thanks for your help.

GooLiang avatar Mar 14 '23 13:03 GooLiang

Mh, can you show me the content of /root/anaconda3/envs/lyx/lib/python3.7/site-packages/torch_sparse/matmul.py?

rusty1s avatar Mar 15 '23 11:03 rusty1s

matmul.txt

GooLiang avatar Mar 15 '23 11:03 GooLiang

Mh, looks like this is an issue with PyTorch then, not with torch_sparse. I assume that

import torch

A = torch.randn(5, 5).to_torch_coo_tensor().cuda()
torch.sparse.mm(A, A)

also fails for you?

rusty1s avatar Mar 15 '23 12:03 rusty1s

Yes, running the above code reports the following error: AttributeError: 'Tensor' object has no attribute 'to_torch_coo_tensor'. However, my torch has been upgraded to 1.13.1; torch.__version__ returns '1.13.1+cu116'.

GooLiang avatar Mar 15 '23 12:03 GooLiang

Needs to be A = torch.randn(5, 5).to_sparse().cuda(), sorry for the confusion.

rusty1s avatar Mar 15 '23 12:03 rusty1s
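Putting the corrected repro together (a minimal sketch; small inputs are expected to succeed, and the failure only shows up at the scale of the original matrices):

import torch

# Small dense matrix converted to a sparse COO tensor on the GPU.
A = torch.randn(5, 5).to_sparse().cuda()

# Sparse-sparse matmul through PyTorch directly, bypassing torch_sparse.
out = torch.sparse.mm(A, A)
print(out)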

Running the above code is successful :(

GooLiang avatar Mar 15 '23 12:03 GooLiang

Then I am at a loss :(

What happens if you run

adj_l @ adj_r

in your code above?

rusty1s avatar Mar 15 '23 12:03 rusty1s

Suppose there are five different pairs of data adj_l, adj_r. The first four run without any problem, but running the fifth reports the error mentioned at the beginning of my question: CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

After I upgraded torch and torch_sparse, I ran it again and got the second error: RuntimeError: CUDA error: insufficient resources when calling cusparseSpGEMM_workEstimation( handle, opA, opB, &alpha, matA, matB, &beta, matC, computeType, CUSPARSE_SPGEMM_DEFAULT, spgemmDesc, &bufferSize1, dBuffer1)

GooLiang avatar Mar 15 '23 13:03 GooLiang
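One way to check whether accumulated GPU memory is the culprit (an illustrative sketch, not code from the thread; adj_l and adj_r stand for one of the five pairs described above):

import torch

def log_gpu_memory(tag, device='cuda:0'):
    # Report allocated vs. reserved memory on the device, in MiB.
    alloc = torch.cuda.memory_allocated(device) / 2**20
    reserved = torch.cuda.memory_reserved(device) / 2**20
    print(f'{tag}: allocated={alloc:.0f} MiB, reserved={reserved:.0f} MiB')

log_gpu_memory('before matmul')
out = adj_l @ adj_r
log_gpu_memory('after matmul')

del out
torch.cuda.empty_cache()  # release cached blocks before the next pair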

Do you have a reproducible example? Happy to look into it.

rusty1s avatar Mar 15 '23 13:03 rusty1s

I have uploaded adj_l and adj_r to Google Drive: https://drive.google.com/drive/folders/1JPjktlEP-mdacuGiBDbdWluBTA32Oiqn?usp=share_link

You can download these two files and run:

adj_l.to('cuda:0').matmul(adj_l.to('cuda:0'))

Thanks for your help!

GooLiang avatar Mar 18 '23 15:03 GooLiang

Thanks. Will look into it.

rusty1s avatar Mar 21 '23 20:03 rusty1s

Hi, Sorry to bother you. Is there any update on this issue?

GooLiang avatar Mar 27 '23 11:03 GooLiang

I can reproduce this :( I assume that your matrices are too large for torch.sparse.mm. You can see that

adj_l = adj_l.to_torch_sparse_csr_tensor()
adj_r = adj_r.to_torch_sparse_csr_tensor()
out = adj_l @ adj_r

also fails, while something like

adj_l = adj_l[:10000]
adj_r = adj_r[:, :10000]
out = adj_l @ adj_r

works. I suggest creating a similar issue in https://github.com/pytorch/pytorch.

rusty1s avatar Mar 28 '23 13:03 rusty1s
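Since the sliced multiplication above succeeds, one possible workaround is to compute the product block-by-block over the rows of adj_l and stitch the partial results back into a single SparseTensor. This is only a sketch, not an official torch_sparse routine, and it assumes each per-block product stays within the limits that the full product exceeds.

import torch
from torch_sparse import SparseTensor

def blocked_spspmm(adj_l, adj_r, block_size=10000):
    rows, cols, vals = [], [], []
    num_rows = adj_l.size(0)
    for start in range(0, num_rows, block_size):
        end = min(start + block_size, num_rows)
        # Multiply a row slice of adj_l with the full adj_r.
        out = adj_l[start:end] @ adj_r
        r, c, v = out.coo()
        rows.append(r + start)  # shift block-local row indices back to global rows
        cols.append(c)
        vals.append(v)
    return SparseTensor(row=torch.cat(rows), col=torch.cat(cols),
                        value=torch.cat(vals),
                        sparse_sizes=(num_rows, adj_r.size(1)))

out = blocked_spspmm(adj_l, adj_r)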

This dataset is actually the data from ogbn-mag's PF and FP relations. I noticed that your work also appears on the MAG leaderboard, so maybe I'll study your previous work to see how to use torch_sparse to support the MAG dataset. Anyway, thanks for your help!

GooLiang avatar Mar 30 '23 01:03 GooLiang

We never used sparse-sparse matrix multiplication in our benchmarks, so we never ran into this issue.

rusty1s avatar Mar 30 '23 05:03 rusty1s

This issue had no activity for 6 months. It will be closed in 2 weeks unless there is some new activity. Is this issue already resolved?

github-actions[bot] avatar Sep 27 '23 01:09 github-actions[bot]

Hi, Sorry to bother you. Is there any update on this issue?

GooLiang avatar Jan 12 '24 13:01 GooLiang

We are using the PyTorch routine now for SpSpMM, so this is either no longer an issue or needs to be routed to the PyTorch team directly.

rusty1s avatar Jan 12 '24 13:01 rusty1s