[BUG] - unpack error in CUDA extension tutorial
Add Link
https://pytorch.org/tutorials/advanced/cpp_extension.html
Describe the bug
Code to reproduce the issue.
import time
import torch

batch_size = 16
input_features = 32
state_size = 128

# Check if CUDA (GPU) is available
if torch.cuda.is_available():
    # Set the device to CUDA
    device = torch.device("cuda")
    print("CUDA is available. Using GPU.")
else:
    # If CUDA is not available, fall back to CPU
    device = torch.device("cpu")
    print("CUDA is not available. Using CPU.")

X = torch.randn(batch_size, input_features, device=device)
h = torch.randn(batch_size, state_size, device=device)
C = torch.randn(batch_size, state_size, device=device)

rnn = LLTM(input_features, state_size).to(device)

forward = 0
backward = 0
for _ in range(2):
    start = time.time()
    new_h, new_C = rnn(X, (h, C))
    forward += time.time() - start

    start = time.time()
    (new_h.sum() + new_C.sum()).backward()
    backward += time.time() - start

print('Forward: {:.3f} s | Backward {:.3f} s'.format(forward, backward))
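For context, LLTM is not defined in the snippet above; toy_model.py presumably also contains the LLTM wrapper module and the LLTMFunction autograd wrapper from the tutorial. A condensed sketch of that wrapper as it appears in the tutorial (forward/backward plumbing omitted; the exact contents of toy_model.py are an assumption):

import math
import torch

class LLTM(torch.nn.Module):
    def __init__(self, input_features, state_size):
        super(LLTM, self).__init__()
        self.input_features = input_features
        self.state_size = state_size
        # One fused weight matrix for the input, output, and candidate-cell gates.
        self.weights = torch.nn.Parameter(
            torch.empty(3 * state_size, input_features + state_size))
        self.bias = torch.nn.Parameter(torch.empty(3 * state_size))
        self.reset_parameters()

    def reset_parameters(self):
        stdv = 1.0 / math.sqrt(self.state_size)
        for weight in self.parameters():
            weight.data.uniform_(-stdv, +stdv)

    def forward(self, input, state):
        # LLTMFunction is the torch.autograd.Function defined in the tutorial
        # on top of the compiled extension (lltm_cpp / lltm_cuda).
        return LLTMFunction.apply(input, self.weights, self.bias, *state)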
Error info:
Traceback (most recent call last):
File "toy_model.py", line 74, in <module>
(new_h.sum() + new_C.sum()).backward()
File "/home/username/miniconda3/envs/tmp/lib/python3.8/site-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "/home/username/miniconda3/envs/tmp/lib/python3.8/site-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/home/username/miniconda3/envs/tmp/lib/python3.8/site-packages/torch/autograd/function.py", line 288, in apply
return user_fn(self, *args)
File "toy_model.py", line 25, in backward
d_old_h, d_input, d_weights, d_bias, d_old_cell = outputs
ValueError: too many values to unpack (expected 5)
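The unpack at toy_model.py line 25 expects exactly five gradient tensors, but the CUDA variant of the extension in the tutorial (lltm_cuda_backward) returns six, with an extra d_gates at the end, so reusing the C++ version's five-value backward wrapper together with the CUDA extension fails in exactly this way. A hedged sketch of a backward that matches the six CUDA outputs (module name lltm_cuda as in the tutorial; whether this is what toy_model.py does is an assumption):

import torch
import lltm_cuda  # compiled CUDA extension from the tutorial

class LLTMFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weights, bias, old_h, old_cell):
        outputs = lltm_cuda.forward(input, weights, bias, old_h, old_cell)
        new_h, new_cell = outputs[:2]
        variables = outputs[1:] + [weights]
        ctx.save_for_backward(*variables)
        return new_h, new_cell

    @staticmethod
    def backward(ctx, grad_h, grad_cell):
        outputs = lltm_cuda.backward(
            grad_h.contiguous(), grad_cell.contiguous(), *ctx.saved_tensors)
        # The CUDA kernel wrapper returns a sixth tensor (d_gates) that the
        # C++-only extension does not; unpack it here and simply drop it.
        d_old_h, d_input, d_weights, d_bias, d_old_cell, d_gates = outputs
        return d_input, d_weights, d_bias, d_old_h, d_old_cell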
Describe your environment
- Platform: Ubuntu
- CUDA: yes
- PyTorch version: 2.1.0+cu118