deep-daze
Windows 10 WSL Run Fails
If someone has a guide on getting this to run on WSL (or just on regular Windows), that would be helpful!
For now I am running this in WSL, and I get the following error when I run the command imagine "a dog delivering pizza":
Traceback (most recent call last):
  File "/usr/local/bin/imagine", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/dist-packages/deep_daze/cli.py", line 111, in main
    fire.Fire(train)
  File "/usr/local/lib/python3.8/dist-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.8/dist-packages/fire/core.py", line 466, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/usr/local/lib/python3.8/dist-packages/fire/core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/deep_daze/cli.py", line 73, in train
    imagine = Imagine(
  File "/usr/local/lib/python3.8/dist-packages/deep_daze/deep_daze.py", line 286, in __init__
    self.clip_encoding = self.create_clip_encoding(text=text, img=img, encoding=clip_encoding)
  File "/usr/local/lib/python3.8/dist-packages/deep_daze/deep_daze.py", line 309, in create_clip_encoding
    encoding = self.create_text_encoding(text)
  File "/usr/local/lib/python3.8/dist-packages/deep_daze/deep_daze.py", line 317, in create_text_encoding
    text_encoding = perceptor.encode_text(tokenized_text).detach()
  File "/usr/local/lib/python3.8/dist-packages/deep_daze/clip.py", line 526, in encode_text
    x = self.transformer(x)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/deep_daze/clip.py", line 381, in forward
    return self.resblocks(x)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/container.py", line 119, in forward
    input = module(input)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/deep_daze/clip.py", line 368, in forward
    x = x + self.attention(self.ln_1(x))
  File "/usr/local/lib/python3.8/dist-packages/deep_daze/clip.py", line 365, in attention
    return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/activation.py", line 980, in forward
    return F.multi_head_attention_forward(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 4633, in multi_head_attention_forward
    q, k, v = linear(query, in_proj_weight, in_proj_bias).chunk(3, dim=-1)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 1753, in linear
    return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasCreate(handle)`
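Since the error happens inside torch rather than in deep-daze's own code, a first step is to confirm what the installed PyTorch was built against and whether it can see the GPU at all. Below is a minimal sketch of such a check; cuda_report is a hypothetical helper (not part of deep-daze or torch) and it degrades gracefully if torch is missing:

```python
import importlib.util

def cuda_report() -> dict:
    """Collect basic facts useful when debugging cublasCreate failures.

    Reports whether torch is installed, which CUDA toolkit the wheel was
    built against, and whether torch can see a CUDA device right now.
    """
    info = {
        "torch_installed": False,
        "torch_version": None,
        "built_cuda": None,
        "cuda_available": None,
    }
    if importlib.util.find_spec("torch") is None:
        return info  # torch not installed in this environment
    import torch
    info["torch_installed"] = True
    info["torch_version"] = torch.__version__      # e.g. "1.8.0+cu111"
    info["built_cuda"] = torch.version.cuda        # toolkit the wheel targets
    info["cuda_available"] = torch.cuda.is_available()
    return info

print(cuda_report())
```

If built_cuda is newer than what the driver supports, or cuda_available comes back False under WSL, the cuBLAS handle creation will fail before deep-daze ever gets to render anything.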
I am getting the same error using Python 3.6 on Ubuntu 18.04.5.
...
  File "/home/sander/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1753, in linear
    return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasCreate(handle)`
Key details of $ nvidia-smi -q:
Driver Version : 450.102.04
CUDA Version : 11.0
Product Name : GeForce RTX 2060 SUPER
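One common cause of cublasCreate failures is a PyTorch wheel built for a newer CUDA toolkit than the installed driver supports; the driver above reports CUDA 11.0, so a cu111 wheel would be too new for it. A minimal sketch of that comparison (driver_supports_wheel is a hypothetical helper, and the version strings are taken from the nvidia-smi output above):

```python
def driver_supports_wheel(driver_cuda: str, wheel_cuda: str) -> bool:
    """Return True if the driver's reported CUDA version is at least
    the toolkit version the PyTorch wheel was compiled against."""
    def to_tuple(v: str):
        return tuple(int(part) for part in v.split("."))
    return to_tuple(driver_cuda) >= to_tuple(wheel_cuda)

print(driver_supports_wheel("11.0", "11.0"))  # True  -> cu110 wheel is fine
print(driver_supports_wheel("11.0", "11.1"))  # False -> cu111 wheel too new
```

If the check fails, reinstalling torch with a wheel matching (or older than) the driver's CUDA version, or updating the NVIDIA driver, would be the usual next step.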
I'm also getting the same error with Python 3.7.6 on Ubuntu 18.04.5. CUDA version 11.2, 2x GeForce RTX 2080 Ti.
So far I've tried deleting the on-disk cache with sudo rm -rf ~/.nv and running imagine with the low-memory recommendations, but I'm still having the issue.
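Since the low-memory settings didn't help, it may be worth isolating torch from deep-daze entirely: if the smallest possible cuBLAS call fails the same way, the problem is in the torch/CUDA/WSL setup rather than in this package. A minimal sketch (tiny_cublas_check is a hypothetical helper; it returns a message instead of crashing when torch or a GPU is unavailable):

```python
import importlib.util

def tiny_cublas_check() -> str:
    """Attempt the smallest possible cuBLAS operation: a 2x2 matmul on the GPU.

    If this raises the same CUBLAS_STATUS_INTERNAL_ERROR, the torch/CUDA
    install itself is broken and no deep-daze flag will work around it.
    """
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if not torch.cuda.is_available():
        return "no CUDA device visible to torch"
    a = torch.randn(2, 2, device="cuda")
    # Matrix multiply goes through cuBLAS; this is where the handle is created.
    return str(a @ a)

print(tiny_cublas_check())
```

Under WSL specifically, torch.cuda.is_available() returning False would point at the WSL GPU passthrough setup (which requires a CUDA-enabled WSL 2 driver) rather than at deep-daze.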