
Error in executing usage.ipynb

MadaraPremawardhana opened this issue 2 years ago · 2 comments

I got the following error when running usage.ipynb:

Output exceeds the size limit. Open the full output data in a text editor

AttributeError                            Traceback (most recent call last)
Cell In[4], line 7
      4 z = torch.argmax(z_logits, axis=1)
      5 z = F.one_hot(z, num_classes=enc.vocab_size).permute(0, 3, 1, 2).float()
----> 7 x_stats = dec(z).float()
      8 x_rec = unmap_pixels(torch.sigmoid(x_stats[:, :3]))
      9 x_rec = T.ToPILImage(mode='RGB')(x_rec[0])

File c:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py:1194, in Module._call_impl(self, *input, **kwargs)
   1190 # If we don't have any hooks, we want to skip the rest of the logic in
   1191 # this function, and just call forward.
   1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1193         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194     return forward_call(*input, **kwargs)
   1195 # Do not call functions when jit is used
   1196 full_backward_hooks, non_full_backward_hooks = [], []

File c:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\dall_e\decoder.py:94, in Decoder.forward(self, x)
    91 if x.dtype != torch.float32:
    92     raise ValueError('input must have dtype torch.float32')
---> 94 return self.blocks(x)

File c:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py:1194, in Module._call_impl(self, *input, **kwargs)
   1190 # If we don't have any hooks, we want to skip the rest of the logic in ...

   1268     return modules[name]
-> 1269 raise AttributeError("'{}' object has no attribute '{}'".format(
   1270     type(self).__name__, name))

AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'
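This error typically appears when a model serialized with an older PyTorch release is loaded into a newer one: `nn.Upsample` later gained a `recompute_scale_factor` attribute, and unpickled instances created before that change lack it, so the forward pass fails when it reads the attribute. A common workaround (a sketch, not an official fix) is to patch the missing attribute onto every `Upsample` module before calling the decoder; the demo below reproduces the missing attribute by deleting it from a freshly built module:

```python
import torch
import torch.nn as nn

def patch_upsample(model: nn.Module) -> nn.Module:
    """Add the missing recompute_scale_factor attribute to Upsample
    modules deserialized from an older torch version."""
    for m in model.modules():
        if isinstance(m, nn.Upsample) and not hasattr(m, "recompute_scale_factor"):
            m.recompute_scale_factor = None
    return model

# Demo: simulate a module unpickled from an old checkpoint by deleting
# the attribute, then patch it back and run a forward pass.
up = nn.Upsample(scale_factor=2, mode="nearest")
del up.recompute_scale_factor      # forward would now raise the AttributeError
patch_upsample(up)

y = up(torch.randn(1, 3, 4, 4))
print(y.shape)                     # torch.Size([1, 3, 8, 8])
```

In the notebook you would apply `patch_upsample` to the decoder returned by `load_model` before calling `dec(z)`.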

MadaraPremawardhana avatar Jan 21 '23 14:01 MadaraPremawardhana

I got the same error. I tried the fix suggested here: https://github.com/openai/DALL-E/issues/76#issuecomment-1229197798 but it doesn't work.

DarkApocalypse avatar Feb 07 '23 18:02 DarkApocalypse

I got the same error using cpu as the argument. Then I tried different options (cuda:0, cuda) for torch.device(), and they all throw the same exception: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward) P.S. I used Google Colab with GPU selected as the runtime type, and torch.cuda.is_available() returns True. It looks like the model downloaded in the usage notebook was created on a different device!
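The RuntimeError above is PyTorch's generic device-mismatch error: the model's weights ended up on one device while the input tensor is on another. Regardless of which checkpoint is loaded, the usual cure is to move both the model and its input to the same torch.device before the forward call. A minimal sketch, using a plain Conv2d as a stand-in for the decoder the notebook loads:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Conv2d(3, 8, kernel_size=1)   # stand-in for the DALL-E decoder
x = torch.randn(1, 3, 4, 4)              # stand-in for the one-hot latent z

# Move the weights AND the input to the same device; mixing them
# triggers "Expected all tensors to be on the same device".
model = model.to(device)
x = x.to(device)

y = model(x)
assert y.device.type == next(model.parameters()).device.type
print(y.shape)  # torch.Size([1, 8, 4, 4])
```

In the notebook this means calling `.to(device)` on the result of `load_model` and on `z` with the same `device` object, rather than passing different device strings to the two calls.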

bitsnaps avatar May 25 '23 09:05 bitsnaps