
TypeError: cuda() missing 1 required positional argument: 'self'

Open dcheason opened this issue 5 years ago • 5 comments

When I run `python train.py mr`, I get the following error:

```
Traceback (most recent call last):
  File "train.py", line 82, in <module>
    model_func = model_func.cuda()
TypeError: cuda() missing 1 required positional argument: 'self'
```

Do you know how to solve this error?

dcheason avatar Nov 14 '19 09:11 dcheason
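For context: this `TypeError` usually means `.cuda()` was called on the model *class* rather than on an *instance*, so there is no `self` to bind. Below is a minimal sketch that reproduces and fixes the error; `TinyModel` is a hypothetical stand-in for the repo's model class, not code from this project:

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):  # hypothetical stand-in for the repo's model class
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x):
        return self.linear(x)

# Calling .cuda() on the class itself reproduces the error, because the
# method is unbound and Python receives no `self`:
#   TinyModel.cuda()   # TypeError: cuda() missing 1 required positional argument: 'self'

model = TinyModel()          # instantiate first...
if torch.cuda.is_available():
    model = model.cuda()     # ...now .cuda() is a bound method and works
```

So if `train.py` assigns the class itself to `model_func` before line 82, constructing an instance there should resolve the error.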

Did you manage to solve it?

TianlinZhang668 avatar Nov 27 '19 07:11 TianlinZhang668

When I run `python train.py R8`, I get the following error:

```
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=157 error=711 : peer mapping resources exhausted
Traceback (most recent call last):
  File "train.py", line 152, in <module>
    logits = model(t_features)
  File "/raid/home/lcq/.pyenv/versions/3.6.4/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/raid/home/lcq/.pyenv/versions/3.6.4/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 148, in forward
    inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
  File "/raid/home/lcq/.pyenv/versions/3.6.4/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 159, in scatter
    return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
  File "/raid/home/lcq/.pyenv/versions/3.6.4/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 36, in scatter_kwargs
    inputs = scatter(inputs, target_gpus, dim) if inputs else []
  File "/raid/home/lcq/.pyenv/versions/3.6.4/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 28, in scatter
    res = scatter_map(inputs)
  File "/raid/home/lcq/.pyenv/versions/3.6.4/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 15, in scatter_map
    return list(zip(*map(scatter_map, obj)))
  File "/raid/home/lcq/.pyenv/versions/3.6.4/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 13, in scatter_map
    return Scatter.apply(target_gpus, None, dim, obj)
  File "/raid/home/lcq/.pyenv/versions/3.6.4/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 89, in forward
    outputs = comm.scatter(input, target_gpus, chunk_sizes, ctx.dim, streams)
  File "/raid/home/lcq/.pyenv/versions/3.6.4/lib/python3.6/site-packages/torch/cuda/comm.py", line 147, in scatter
    return tuple(torch._C._scatter(tensor, devices, chunk_sizes, dim, streams))
RuntimeError: cuda runtime error (711) : peer mapping resources exhausted at /pytorch/aten/src/THC/THCGeneral.cpp:157
```

Do you know how to solve this error?

ZLCQ avatar Nov 27 '19 09:11 ZLCQ
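For context: CUDA error 711 (`peer mapping resources exhausted`) typically shows up on hosts with many GPUs, because `nn.DataParallel` scatters inputs across every visible device and CUDA peer-to-peer mapping supports only a limited number of peers. A hedged workaround sketch is to restrict the run to a single GPU; `CUDA_VISIBLE_DEVICES` and the `device_ids` parameter are standard mechanisms, but whether this resolves the setup above is an assumption:

```python
import os

# Expose only GPU 0 to PyTorch; this must be set before torch
# initializes CUDA (i.e., before the first CUDA call).
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
import torch.nn as nn

# Alternative: keep all GPUs visible but tell DataParallel to use one,
# so it never scatters across devices (hypothetical `model` variable):
# model = nn.DataParallel(model, device_ids=[0])
```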

> When I run `python train.py mr`, I get the following error:
>
> ```
> Traceback (most recent call last):
>   File "train.py", line 82, in <module>
>     model_func = model_func.cuda()
> TypeError: cuda() missing 1 required positional argument: 'self'
> ```
>
> Do you know how to solve this error?

I am facing the same issue. For now, I have commented out the following section in the train.py file so that I could run the code.

```python
'''
if torch.cuda.is_available():
    model_func = model_func.cuda()
    t_features = t_features.cuda()
    t_y_train = t_y_train.cuda()
    t_y_val = t_y_val.cuda()
    t_y_test = t_y_test.cuda()
    t_train_mask = t_train_mask.cuda()
    tm_train_mask = tm_train_mask.cuda()
    for i in range(len(support)):
        t_support = [t.cuda() for t in t_support if True]
'''
```

It is lines 82-93 of train.py.

chetankm1992 avatar Dec 10 '19 21:12 chetankm1992
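Commenting the block out avoids the crash, but it also disables GPU training. If the underlying cause is fixed (`model_func` must hold a model *instance*, not the class), a device-agnostic sketch of the same lines would run on both CPU-only and CUDA machines; the tensor names below are assumed to match train.py:

```python
# Device-agnostic sketch of the block above (train.py lines 82-93);
# assumes model_func is an nn.Module instance and the tensors exist
# under these names in the original file.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model_func = model_func.to(device)
t_features = t_features.to(device)
t_y_train = t_y_train.to(device)
t_y_val = t_y_val.to(device)
t_y_test = t_y_test.to(device)
t_train_mask = t_train_mask.to(device)
tm_train_mask = tm_train_mask.to(device)
t_support = [t.to(device) for t in t_support]
```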

I have the same problem:

```
(D:\Anaconda3\envs\gcnPytorch) D:\PycharmProjects\text_gcn.pytorch-master>python train.py R8
Traceback (most recent call last):
  File "train.py", line 81, in <module>
    model_func = model_func.cuda()
TypeError: cuda() missing 1 required positional argument: 'self'
```

Does anyone know how to solve this error?

JadenFK avatar Dec 17 '19 15:12 JadenFK

Go to the train.py file, comment out the section quoted above (lines 82-93), and then execute the code again.

chetankm1992 avatar Dec 20 '19 01:12 chetankm1992