DeLF-pytorch
len() of a 0-d tensor
Hi, thanks for releasing this code. I am trying to train the model on my dataset, but sometimes when I run the visualize notebook I get an error:
/home/ubuntu/DeLF-pytorch/helper/delf_helper.pyc in GetDelfFeatureFromSingleScale(x, model, scale, pca_mean, pca_vars, pca_matrix, pca_dims, rf, stride, padding, attn_thres, use_pca)
283 # use attention score to select feature.
284 indices = None
--> 285 while(indices is None or len(indices) == 0):
286 indices = torch.gt(scaled_scores, attn_thres).nonzero().squeeze()
287 attn_thres = attn_thres * 0.5 # use lower threshold if no indexes are found.
/usr/local/lib/python2.7/dist-packages/torch/tensor.pyc in __len__(self)
368 def __len__(self):
369 if self.dim() == 0:
--> 370 raise TypeError("len() of a 0-d tensor")
371 return self.shape[0]
Any idea why?
How about changing
while(indices is None or len(indices) == 0):
to
while(indices is None or indices.dim() == 0):
I can fix this issue that way on pytorch==1.0.
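The crash happens because `squeeze()` returns a 0-d tensor when exactly one index passes the threshold, and `len()` raises on 0-d tensors. A minimal sketch of the failure, plus a variant guard using `numel()` (an alternative to the `dim()` check above: `numel()` is 0 for an empty result but 1 for the 0-d case, so the retry loop keeps its original meaning):

```python
import torch

scores = torch.tensor([0.1, 0.9, 0.2])

# With exactly one match, nonzero() gives shape (1, 1) and squeeze()
# collapses it to a 0-d tensor, on which len() raises TypeError.
indices = torch.gt(scores, 0.5).nonzero().squeeze()
assert indices.dim() == 0

# Guarding with numel() instead of len() handles both the empty and
# the 0-d case without ever calling len():
attn_thres = 0.95
indices = None
while indices is None or indices.numel() == 0:
    indices = torch.gt(scores, attn_thres).nonzero().squeeze()
    attn_thres = attn_thres * 0.5  # lower the threshold if nothing passes
```

After the loop, `indices` is the 0-d tensor holding index 1, and `indices.item()` extracts it as a plain Python int.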
Hello, when you train PCA, do you get this problem?
Can you tell me the size of your GPU memory, please? Mine is 8 GB, but I always run out of memory...
It is 12 GB, and you can reduce the number of pictures in your dataset.
No, no: when I load the model, more than 5 GB of memory is already used. When a single picture is larger than 1000×1000, it reports OOM! So I have to resize the images to 720×720, but the results seem too bad~
If you are Chinese, can we talk on WeChat?
I resized the images to 128×128; maybe that's OK.
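For reference, a hypothetical pre-processing helper (not part of DeLF-pytorch; the sizes are the ones mentioned in this thread) that caps the longer side of an input image before feature extraction, using integer arithmetic so the target size is exact:

```python
from PIL import Image

def resize_max_side(img, max_side=720):
    """Shrink img so its longer side is at most max_side, keeping aspect ratio."""
    w, h = img.size
    m = max(w, h)
    if m <= max_side:
        return img  # already small enough
    return img.resize((w * max_side // m, h * max_side // m))

# A 1000x1000 input (the OOM size above) becomes 720x720.
img = Image.new("RGB", (1000, 1000))
small = resize_max_side(img)
assert small.size == (720, 720)
```

Whether 720 (or 128) is small enough depends on the GPU; the trade-off reported in this thread is that aggressive downsizing hurts retrieval quality.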
OK, I think I have found where I went wrong. If I have good news, I will reply soon~
Add a try...except block, or check whether order.dim() == 0 and, if so, use i = order.item(). I fixed the problem this way.
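A sketch of that guard, assuming `order` is an index tensor that may have been collapsed to 0-d by squeeze() (the helper name here is made up for illustration):

```python
import torch

def first_index(order):
    # After squeeze(), a single surviving index becomes a 0-d tensor,
    # which cannot be indexed with order[0]; item() extracts the scalar.
    if order.dim() == 0:
        return order.item()
    return order[0].item()

assert first_index(torch.tensor(3)) == 3          # 0-d tensor
assert first_index(torch.tensor([5, 2, 7])) == 5  # 1-d tensor
```

An equivalent try...except version would attempt `order[0]` and fall back to `order.item()` on IndexError; the explicit dim() check just makes the two cases visible.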