
Torch not compiled with CUDA enabled

Open Tobe2d opened this issue 2 years ago • 5 comments

I am using the 1-Click Windows Installer. When I run 'win_config.bat', everything completes without errors. However, when I run 'win_start.bat', it shows "Torch not compiled with CUDA enabled"...

Log:

WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.0.0+cu118 with CUDA 1108 (you have 2.0.0+cpu)
    Python  3.10.10 (you have 3.10.9)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
2023-04-09 20:16:31.534 | INFO     | tldream.server:main:153 - tldream 0.6.1
2023-04-09 20:16:31.535 | INFO     | tldream.server:main:154 - Model cache dir: C:\Users\xxx\.cache\huggingface\hub
2023-04-09 20:16:31.540 | INFO     | tldream.util:init_pipe:102 - Loading model: runwayml/stable-diffusion-v1-5
vae\diffusion_pytorch_model.safetensors not found
Fetching 15 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 15001.09it/s]
E:\lama-tldream\installer\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ E:\lama-tldream\installer\lib\site-packages\tldream\__init__.py:86 in start          │
│                                                                                                  │
│    83 │                                                                                          │
│    84 │   from .server import main                                                               │
│    85 │                                                                                          │
│ ❱  86 │   main(                                                                                  │
│    87 │   │   listen=listen,                                                                     │
│    88 │   │   port=port,                                                                         │
│    89 │   │   device=device,                                                                     │
│                                                                                                  │
│ E:\lama-tldream\installer\lib\site-packages\tldream\server.py:167 in main            │
│                                                                                                  │
│   164 │   _torch_dtype = torch_dtype                                                             │
│   165 │                                                                                          │
│   166 │   # TODO: lazy load model after server started to get download progress                  │
│ ❱ 167 │   controlled_model = init_pipe(                                                          │
│   168 │   │   model,                                                                             │
│   169 │   │   device,                                                                            │
│   170 │   │   torch_dtype=torch_dtype,                                                           │
│                                                                                                  │
│ E:\lama-tldream\installer\lib\site-packages\tldream\util.py:124 in init_pipe         │
│                                                                                                  │
│   121 │   if cpu_offload:                                                                        │
│   122 │   │   pipe.enable_sequential_cpu_offload()                                               │
│   123 │   else:                                                                                  │
│ ❱ 124 │   │   pipe.to(device)                                                                    │
│   125 │                                                                                          │
│   126 │   if shared.use_xformers:                                                                │
│   127 │   │   pipe.enable_xformers_memory_efficient_attention()                                  │
│                                                                                                  │
│ E:\lama-tldream\installer\lib\site-packages\diffusers\pipelines\pipeline_utils.py:39 │
│ 6 in to                                                                                          │
│                                                                                                  │
│    393 │   │   │   │   │   │   " support for`float16` operations on this device in PyTorch. Ple  │
│    394 │   │   │   │   │   │   " `torch_dtype=torch.float16` argument, or use another device fo  │
│    395 │   │   │   │   │   )                                                                     │
│ ❱  396 │   │   │   │   module.to(torch_device)                                                   │
│    397 │   │   return self                                                                       │
│    398 │                                                                                         │
│    399 │   @property                                                                             │
│                                                                                                  │
│ E:\lama-tldream\installer\lib\site-packages\transformers\modeling_utils.py:1811 in   │
│ to                                                                                               │
│                                                                                                  │
│   1808 │   │   │   │   " model has already been set to the correct devices and casted to the co  │
│   1809 │   │   │   )                                                                             │
│   1810 │   │   else:                                                                             │
│ ❱ 1811 │   │   │   return super().to(*args, **kwargs)                                            │
│   1812 │                                                                                         │
│   1813 │   def half(self, *args):                                                                │
│   1814 │   │   # Checks if the model has been loaded in 8-bit                                    │
│                                                                                                  │
│ E:\lama-tldream\installer\lib\site-packages\torch\nn\modules\module.py:1145 in to    │
│                                                                                                  │
│   1142 │   │   │   │   │   │   │   non_blocking, memory_format=convert_to_format)                │
│   1143 │   │   │   return t.to(device, dtype if t.is_floating_point() or t.is_complex() else No  │
│   1144 │   │                                                                                     │
│ ❱ 1145 │   │   return self._apply(convert)                                                       │
│   1146 │                                                                                         │
│   1147 │   def register_full_backward_pre_hook(                                                  │
│   1148 │   │   self,                                                                             │
│                                                                                                  │
│ E:\lama-tldream\installer\lib\site-packages\torch\nn\modules\module.py:797 in _apply │
│                                                                                                  │
│    794 │                                                                                         │
│    795 │   def _apply(self, fn):                                                                 │
│    796 │   │   for module in self.children():                                                    │
│ ❱  797 │   │   │   module._apply(fn)                                                             │
│    798 │   │                                                                                     │
│    799 │   │   def compute_should_use_set_data(tensor, tensor_applied):                          │
│    800 │   │   │   if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):           │
│                                                                                                  │
│ E:\lama-tldream\installer\lib\site-packages\torch\nn\modules\module.py:797 in _apply │
│                                                                                                  │
│    794 │                                                                                         │
│    795 │   def _apply(self, fn):                                                                 │
│    796 │   │   for module in self.children():                                                    │
│ ❱  797 │   │   │   module._apply(fn)                                                             │
│    798 │   │                                                                                     │
│    799 │   │   def compute_should_use_set_data(tensor, tensor_applied):                          │
│    800 │   │   │   if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):           │
│                                                                                                  │
│ E:\lama-tldream\installer\lib\site-packages\torch\nn\modules\module.py:797 in _apply │
│                                                                                                  │
│    794 │                                                                                         │
│    795 │   def _apply(self, fn):                                                                 │
│    796 │   │   for module in self.children():                                                    │
│ ❱  797 │   │   │   module._apply(fn)                                                             │
│    798 │   │                                                                                     │
│    799 │   │   def compute_should_use_set_data(tensor, tensor_applied):                          │
│    800 │   │   │   if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):           │
│                                                                                                  │
│ E:\lama-tldream\installer\lib\site-packages\torch\nn\modules\module.py:820 in _apply │
│                                                                                                  │
│    817 │   │   │   # track autograd history of `param_applied`, so we have to use                │
│    818 │   │   │   # `with torch.no_grad():`                                                     │
│    819 │   │   │   with torch.no_grad():                                                         │
│ ❱  820 │   │   │   │   param_applied = fn(param)                                                 │
│    821 │   │   │   should_use_set_data = compute_should_use_set_data(param, param_applied)       │
│    822 │   │   │   if should_use_set_data:                                                       │
│    823 │   │   │   │   param.data = param_applied                                                │
│                                                                                                  │
│ E:\lama-tldream\installer\lib\site-packages\torch\nn\modules\module.py:1143 in       │
│ convert                                                                                          │
│                                                                                                  │
│   1140 │   │   │   if convert_to_format is not None and t.dim() in (4, 5):                       │
│   1141 │   │   │   │   return t.to(device, dtype if t.is_floating_point() or t.is_complex() els  │
│   1142 │   │   │   │   │   │   │   non_blocking, memory_format=convert_to_format)                │
│ ❱ 1143 │   │   │   return t.to(device, dtype if t.is_floating_point() or t.is_complex() else No  │
│   1144 │   │                                                                                     │
│   1145 │   │   return self._apply(convert)                                                       │
│   1146                                                                                           │
│                                                                                                  │
│ E:\lama-tldream\installer\lib\site-packages\torch\cuda\__init__.py:239 in _lazy_init │
│                                                                                                  │
│    236 │   │   │   │   "Cannot re-initialize CUDA in forked subprocess. To use CUDA with "       │
│    237 │   │   │   │   "multiprocessing, you must use the 'spawn' start method")                 │
│    238 │   │   if not hasattr(torch._C, '_cuda_getDeviceCount'):                                 │
│ ❱  239 │   │   │   raise AssertionError("Torch not compiled with CUDA enabled")                  │
│    240 │   │   if _cudart is None:                                                               │
│    241 │   │   │   raise AssertionError(                                                         │
│    242 │   │   │   │   "libcudart functions unavailable. It looks like you have a broken build?  │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AssertionError: Torch not compiled with CUDA enabled
Press any key to continue . . .
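
The xFormers warning at the top of the log shows that torch is the CPU-only wheel (2.0.0+cpu) rather than a CUDA build. A quick way to confirm which build is installed (a minimal sketch, run from the activated installer environment):

# check_torch.py - confirm whether the installed torch wheel was built with CUDA
import torch

print("torch version:", torch.__version__)        # "2.0.0+cpu" here; CUDA builds end in "+cu117", "+cu118", etc.
print("CUDA available:", torch.cuda.is_available())
print("CUDA runtime:", torch.version.cuda)        # None for CPU-only wheels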

Tobe2d commented on Apr 09 '23 at 16:04

Thanks for your support! Pinning the xformers version in win_config.bat should solve this problem (an unpinned, newer xformers most likely pulled in the CPU-only torch wheel as a dependency):

@call pip install xformers==0.0.16

Full win_config.bat script:

@echo off

set PATH=C:\Windows\System32;%PATH%

@call installer\Scripts\activate.bat

@call conda-unpack

@call conda install -y -c conda-forge cudatoolkit=11.7
@call pip install torch==1.13.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
@call pip install xformers==0.0.16
@call pip3 install -U tldream

tldream --start-web-config --config-file %0\..\installer_config.json

PAUSE
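
After re-running win_config.bat with the pinned versions, a quick sanity check (a sketch; run it inside the activated installer environment) confirms the CUDA wheel is back in place:

# verify_fix.py - confirm the pinned versions replaced the CPU-only wheel
import torch
import xformers

print("torch:", torch.__version__)        # expected: 1.13.1+cu117
print("xformers:", xformers.__version__)  # expected: 0.0.16
print("CUDA available:", torch.cuda.is_available())  # expected: True on a CUDA-capable machine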

Sanster commented on Apr 10 '23 at 14:04

Thanks @Sanster, that fixed it! However, now I am getting some warnings:

E:\tldream\installer\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.

Tobe2d commented on Apr 10 '23 at 18:04

And the log shows some items not found:

INFO:     Uvicorn running on http://127.0.0.1:4242 (Press CTRL+C to quit)
INFO:     127.0.0.1:65174 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:65174 - "GET /_next/static/css/fb13a1146f6a82a5.css HTTP/1.1" 200 OK
INFO:     127.0.0.1:65174 - "GET /_next/static/chunks/webpack-a45dcddfcffa5992.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:65175 - "GET /_next/static/chunks/framework-0f4b6e2ddffaf68b.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:65178 - "GET /_next/static/chunks/main-93d21f3896dc1d91.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:65179 - "GET /_next/static/chunks/pages/_app-86df0457e8e08c70.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:65180 - "GET /_next/static/chunks/pages/index-0f8676ad5175418d.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:65181 - "GET /_next/static/UfOBmMuC1Vh5Vh3fh-WnP/_buildManifest.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:65174 - "GET /_next/static/UfOBmMuC1Vh5Vh3fh-WnP/_ssgManifest.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:65175 - "GET /_next/static/UfOBmMuC1Vh5Vh3fh-WnP/_middlewareManifest.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:65175 - "GET /_next/static/chunks/7dae1ac5.882dda22b44f960b.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:65174 - "GET /_next/static/chunks/147.f78af6f868637eea.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:65181 - "GET /_next/static/chunks/256.a1d9847890eda152.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:65178 - "GET /_next/static/css/7ee352fbfec876f3.css HTTP/1.1" 200 OK
INFO:     127.0.0.1:65178 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO:     ('127.0.0.1', 65187) - "WebSocket /socket.io/?EIO=4&transport=websocket" [accepted]
INFO:     connection open
INFO:     127.0.0.1:65181 - "GET /_next/static/media/recursive-latin-400-normal.ad5f3e31.woff2 HTTP/1.1" 200 OK
INFO:     127.0.0.1:65189 - "GET /manifest.json HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:65181 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:65174 - "GET /sw.js HTTP/1.1" 404 Not Found

Tobe2d commented on Apr 10 '23 at 18:04

You can ignore these logs; just open http://127.0.0.1:4242 in your browser to use tldream.
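
If the FutureWarning noise bothers you, it can be silenced without touching tldream itself; one option (a sketch of Python's standard warnings filter, not something tldream ships) is to register the filter before any transformers code runs:

# a minimal sketch: hide the CLIPFeatureExtractor deprecation notice.
# The filter must be registered before transformers emits the warning.
import warnings

warnings.filterwarnings("ignore", category=FutureWarning)

import transformers  # transformers code that runs after this point stays quiet

Since win_start.bat launches tldream as its own process, the equivalent process-wide switch is setting the environment variable PYTHONWARNINGS=ignore::FutureWarning before starting it.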

Sanster commented on Apr 11 '23 at 00:04

Thank you @Sanster

One thing: if you could add auto-launch and an option to choose desktop mode or web, it would be amazing, just like in lama-cleaner.

Tobe2d commented on Apr 11 '23 at 07:04