nitrain
Cuda support for affine transforms
It seems like affine transforms don't support CUDA, or am I missing something? This code:
import torch
from torch.utils.data import DataLoader
from tqdm import tqdm
from torchsample import transforms, TensorDataset

my_transforms = transforms.RandomAffine(rotation_range=180,
                                        translation_range=0.2,
                                        shear_range=None,
                                        zoom_range=(0.8, 1.2))

# my_images (a numpy array) and plot_sample (a plotting helper) are defined elsewhere
test_imgs = torch.from_numpy(my_images).float().cuda()  # removing cuda() gets rid of the error
test_dataset = TensorDataset(test_imgs, input_transform=my_transforms)
test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False)

index = 0
for _data in tqdm(test_loader, total=len(test_loader)):
    print(index)
    plot_sample(_data.squeeze_().cpu())
    index += 1
Gives this error:
0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/sia/Desktop/radar-image-recognition/data_manipul.py", line 70, in <module>
for _data in tqdm(test_loader, total=len(test_loader)):
File "/home/sia/anaconda3/lib/python3.6/site-packages/tqdm/_tqdm.py", line 953, in __iter__
for obj in iterable:
File "/home/sia/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 179, in __next__
batch = self.collate_fn([self.dataset[i] for i in indices])
File "/home/sia/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 179, in <listcomp>
batch = self.collate_fn([self.dataset[i] for i in indices])
File "/home/sia/anaconda3/lib/python3.6/site-packages/torchsample-0.1.3-py3.6.egg/torchsample/datasets.py", line 250, in __getitem__
File "/home/sia/anaconda3/lib/python3.6/site-packages/torchsample-0.1.3-py3.6.egg/torchsample/datasets.py", line 250, in <listcomp>
File "/home/sia/anaconda3/lib/python3.6/site-packages/torchsample-0.1.3-py3.6.egg/torchsample/transforms/affine_transforms.py", line 92, in __call__
File "/home/sia/anaconda3/lib/python3.6/site-packages/torchsample-0.1.3-py3.6.egg/torchsample/transforms/affine_transforms.py", line 130, in __call__
File "/home/sia/anaconda3/lib/python3.6/site-packages/torchsample-0.1.3-py3.6.egg/torchsample/utils.py", line 131, in th_affine2d
File "/home/sia/anaconda3/lib/python3.6/site-packages/torchsample-0.1.3-py3.6.egg/torchsample/utils.py", line 174, in th_bilinear_interp2d
TypeError: gather received an invalid combination of arguments - got (int, !torch.LongTensor!), but expected (int dim, torch.cuda.LongTensor index)
After a bit of debugging, I think I get this error because the transform tensors are on the CPU while my data is on the GPU. Is there a way to move the transform tensors to the GPU, or do I have to hack the code?
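In the meantime, the only workaround I can think of is to keep the dataset on the CPU (so the transform's internal index tensors match the data) and only move each batch to the GPU after loading. A rough sketch, reusing the same `my_images` from above:

```python
import torch
from torch.utils.data import DataLoader
from tqdm import tqdm
from torchsample import transforms, TensorDataset

my_transforms = transforms.RandomAffine(rotation_range=180,
                                        translation_range=0.2,
                                        shear_range=None,
                                        zoom_range=(0.8, 1.2))

# Keep the tensor on the CPU so the transform's LongTensor indices match its device
test_imgs = torch.from_numpy(my_images).float()
test_dataset = TensorDataset(test_imgs, input_transform=my_transforms)
test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False)

for _data in tqdm(test_loader, total=len(test_loader)):
    _data = _data.cuda()  # move to the GPU only after the CPU-side transform has run
    # ... feed _data to the model / plot it here ...
```

This avoids the crash, but it obviously gives up doing the interpolation itself on the GPU.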
@siarez, @ncullen93 I am getting the same error:
`File "build/bdist.linux-x86_64/egg/torchsample/transforms/affine_transforms.py", line 322, in call File "build/bdist.linux-x86_64/egg/torchsample/utils.py", line 131, in th_affine2d
File "build/bdist.linux-x86_64/egg/torchsample/utils.py", line 174, in th_bilinear_interp2d
TypeError: gather received an invalid combination of arguments - got (int, torch.LongTensor), but expected (int dim, torch.cuda.LongTensor index) `
@siarez @RohitKeshari I added the following lines to the torchsample/functions/affine.py file and it works.
if cuda:
    coords = Variable(th_iterproduct(x.size(1), x.size(2), x.size(3)).float(), requires_grad=False).cuda()
else:
    coords = Variable(th_iterproduct(x.size(1), x.size(2), x.size(3)).float(), requires_grad=False)
Find the function that is raising the error and add .cuda() to the variable that is expected to be a CUDA tensor. Set the cuda variable to True if you want to run it on the GPU.
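If you would rather not hard-code `.cuda()`, the same idea can be written so that it follows the input: move the index/coordinate tensor onto whatever device the data tensor is on. A small sketch of that pattern (the helper name is just for illustration, it is not part of torchsample):

```python
import torch

def match_device(index, x):
    # Illustrative helper: put the LongTensor `index` on the same device as
    # the data tensor `x` before it is used in gather().
    return index.cuda() if x.is_cuda else index

# Inside the failing function (e.g. th_bilinear_interp2d), the call would then
# look something like:
#   idx = match_device(idx, x)
#   vals = x.gather(0, idx)
```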
It would be helpful to integrate CUDA support more deeply into torchsample. For example, in torchsample/transforms/affine_transforms.py, instead of
zoom_matrix = th.FloatTensor([[zx, 0, 0],
                              [0, zy, 0],
                              [0, 0, 1]])
if we directly initialize it as
zoom_matrix = th.cuda.FloatTensor([[zx, 0, 0],
                                   [0, zy, 0],
                                   [0, 0, 1]])
we can avoid the overhead of copying a CPU tensor to the GPU.
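Hard-coding `th.cuda.FloatTensor` would break the CPU-only path, though, so a compromise is to build the (tiny) 3x3 matrix on the CPU and then place it on the input's device; the copy is negligible. A sketch of that idea (the function name is just illustrative, not existing torchsample code):

```python
import torch as th

def make_zoom_matrix(zx, zy, x):
    # Build the 3x3 zoom matrix, then move it to the device of the input
    # tensor x so that both CPU and GPU inputs keep working.
    zoom_matrix = th.FloatTensor([[zx, 0, 0],
                                  [0, zy, 0],
                                  [0, 0, 1]])
    return zoom_matrix.cuda() if x.is_cuda else zoom_matrix
```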