Fooocus
Add setup instructions for Windows (Intel GPUs) - ARC
I followed the install instructions on Windows 10 with Intel ARC A770 and it appears to work for me.
I can also confirm that following these steps on Win10 with an Intel Arc A770 GPU works like a charm.
On Linux (Fedora 39), on the other hand, following the same steps but installing the torch extension according to the Intel Extension for PyTorch guide fails with the following exception: AssertionError("Torch not compiled with CUDA enabled"). I'm trying to find a solution because I would love to get it working on Linux.
@MaciejDromin if you are receiving the error AssertionError("Torch not compiled with CUDA enabled"),
it means you have the wrong torch version, or it was replaced when you installed dependencies on the first run. Try reinstalling torch and the other wheel files from Intel:
python -m pip install --upgrade torch==2.1.0a0 torchvision==0.16.0a0 torchaudio==2.1.0a0 intel-extension-for-pytorch==2.1.10+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
I also got this error when intel_extension_for_pytorch was not imported, but if you have an up-to-date Fooocus version all should be OK.
You also need the Intel oneAPI packages: intel-oneapi-dpcpp-cpp and intel-oneapi-mkl-devel.
Before running, add the oneAPI env vars to your Fooocus startup script or terminal session:
source {DPCPPROOT}/env/vars.sh
source {MKLROOT}/env/vars.sh
or you can try adding the MKL libs to the LD_LIBRARY_PATH env var, like:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/oneapi/mkl/2024.0/lib; your_start_command.sh
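For the Linux case, a quick sanity check may help (this is a sketch of mine, not from the PR, assuming the wheel versions above); it distinguishes a wrong torch build from a missing oneAPI runtime:
# check_xpu.py - run with the same Python that launches Fooocus
import os
import torch
print("torch build:", torch.__version__)  # the Intel build reports 2.1.0a0+cxx11.abi; a plain 2.1.0/+cu121 build is the wrong wheel
try:
    import intel_extension_for_pytorch as ipex  # needs the oneAPI DPC++/MKL runtime libraries
    print("ipex:", ipex.__version__, "| xpu available:", torch.xpu.is_available())
except (ImportError, OSError) as err:
    # Typically a libmkl_*/libsycl* library was not found: source {DPCPPROOT}/env/vars.sh and
    # {MKLROOT}/env/vars.sh, or extend LD_LIBRARY_PATH as shown above, then retry.
    print("intel_extension_for_pytorch failed to load:", err)
    print("LD_LIBRARY_PATH =", os.environ.get("LD_LIBRARY_PATH", "(empty)"))
If the import fails or is_available() returns False, the advice above (reinstall the Intel wheels, set the oneAPI env vars) is the place to start.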
I tried what you suggested: reinstalled torch, verified that I have the intel-oneapi-dpcpp-cpp and intel-oneapi-mkl-devel packages, and exported the path, but I'm still getting the same error :/
I followed the install instructions on Windows 10 with Intel ARC A770 and it appears to work for me.
Where are the install instructions?
@midnitedestiny in this PR, in the Files tab: https://github.com/lllyasviel/Fooocus/pull/2120/files
Just found them, will give an update if it runs smoothly :)
Is it working for you?
Yes, but when I do inpaint to add a necklace it shuts my monitor off while the PC keeps running, so I'm trying to figure that out.
I could try on my AMD Mac if anyone wants?
Hi guys! I tried the instructions in this PR.
I ran installer.bat successfully!
I ran run.bat and got the following message:
Do I need to change the versions?
I am using Windows 10 with an Intel GPU, and my hardware is:
Could you help me, please? I've been trying for several days in different ways.
Thanks in advance.
Hi @cryscript, is this the proper version for Fooocus with an Intel GPU?
C:\Users\charlie\Documents\Fooocus_2120pr>.\python_embeded\python.exe -s
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] on win32
>>> print(torch.__version__)
2.1.0a0+cxx11.abi
In the second part you mentioned oneAPI; is that for my case too? And you mention two bash scripts, but I am on Windows, so what about that?
My hardware is Intel UHD Graphics, 10GB memory.
Thanks in advance!
Yes, but when I do inpaint to add a necklace it shuts my monitor off while the PC keeps running, so I'm trying to figure that out. …
Could I ask you how?
I created the install.bat and modified my run.bat, however I can't get it to work.
Hi guys! I tried the instructions in this PR. I ran installer.bat successfully! I ran run.bat and got the following message: … Could you help me, please?
Your install.bat is missing these; they're different from the ones you have:
"https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchaudio-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl"
"https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchvision-0.16.0a0+cxx11.abi-cp310-cp310-win_amd64.whl"
"https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl"
My screenshot is bad; it is not complete.
This is my installer.bat:
.\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
.\python_embeded\python.exe -m pip install "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torch-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl"
.\python_embeded\python.exe -m pip install "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchaudio-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl"
.\python_embeded\python.exe -m pip install "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchvision-0.16.0a0+cxx11.abi-cp310-cp310-win_amd64.whl"
.\python_embeded\python.exe -m pip install "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl"
pause
I put .\python_embeded\python.exe -m pip install on every line just in case.
My run.bat is:
.\python_embeded\python.exe -s Fooocus\entry_with_update.py --unet-in-bf16 --vae-in-bf16 --clip-in-fp16
pause
My torch version:
C:\Users\charlie\Documents\Fooocus_win64_2-1-831\python_embeded>.\python.exe
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
2.1.0a0+cxx11.abi
I couldn't run it.
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
File "threading.py", line 1016, in bootstrap_inner
File "threading.py", line 953, in run
File "C:\Users\charlie\Documents\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 25, in worker
import modules.default_pipeline as pipeline
File "C:\Users\charlie\Documents\Fooocus_win64_2-1-831\Fooocus\modules\default_pipeline.py", line 1, in
The version installed by installer.bat doesn't use CUDA, right?
Thanks in advance.
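For the CUDA question above: the Nuullll wheels are Intel XPU builds, so a short check run from the embedded interpreter can show which backend the installed torch actually exposes (a sketch of mine, assuming the wheels from installer.bat are still the ones installed):
# run with .\python_embeded\python.exe from the Fooocus folder
import torch
import intel_extension_for_pytorch as ipex   # adds the torch.xpu backend
print(torch.__version__, ipex.__version__)   # expected: 2.1.0a0+cxx11.abi 2.1.10+xpu
print("cuda available:", torch.cuda.is_available())  # expected False on these wheels
print("xpu available:", torch.xpu.is_available())    # expected True with a working Arc/Xe driver
if torch.xpu.is_available():
    print("device:", torch.xpu.get_device_name(0))
If the import fails or xpu is not available, Fooocus cannot use the Intel GPU, and re-running installer.bat (or reinstalling the wheels as earlier in this thread) would be the next step.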
I have an Intel(R) Iris(R) Xe Graphics Family graphics card, and the operating system is Windows 11. How can I run Fooocus on this kind of laptop, and if I have to download a graphics driver, which one should I download?
Will Fooocus work on an Intel Arc A750?
Thank you for your outstanding work. I have installed the environment, but I keep getting stuck loading models to GPU. Is there any solution?
Alhamdulillah, my Fooocus is running smoothly, but I'm having trouble when I select FaceSwap and PyraCanny together—it just stops working. Any suggestions on how to fix this?
I'm using Windows 11 with an Intel Arc A750 GPU. Thanks!
Thank you for your outstanding work. I have installed the environment, but I keep getting stuck loading models to GPU. Is there any solution?
I am getting the same error; anyone here to help, please?
Thank you for your outstanding work. I have installed the environment, but I keep getting stuck loading models to GPU. Is there any solution?
I am also facing a similar error and issues. Please find the error logs below.
Preparation time: 27.41 seconds
[Sampler] refiner_swap_method = joint
2024-05-16 13:45:24,429 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7865/api/predict "HTTP/1.1 200 OK"
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
2024-05-16 13:45:24,455 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7865/api/predict "HTTP/1.1 200 OK"
Requested to load SDXL
Loading 1 new model
C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\intel_extension_for_pytorch\frontend.py:465: UserWarning: Conv BatchNorm folding failed during the optimize process.
warnings.warn(
C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\intel_extension_for_pytorch\frontend.py:472: UserWarning: Linear BatchNorm folding failed during the optimize process.
warnings.warn(
[Fooocus Model Management] Moving model(s) has taken 108.89 seconds
Traceback (most recent call last):
File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 913, in worker
handler(task)
File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 816, in handler
imgs = pipeline.process_diffusion(
File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\modules\default_pipeline.py", line 362, in process_diffusion
sampled_latent = core.ksampler(
File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\modules\core.py", line 308, in ksampler
samples = ldm_patched.modules.sample.sample(model,
File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\samplers.py", line 712, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\modules\sample_hijack.py", line 157, in sample_hacked
samples = sampler.sample(model_wrap, sigmas, extra_args, callback_wrap, noise, latent_image, denoise_mask, disable_pbar)
File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\samplers.py", line 545, in sample
noise = noise * torch.sqrt(1.0 + sigmas[0] ** 2.0)
RuntimeError: Native API failed. Native API returns: -999 (Unknown PI error) -999 (Unknown PI error)
2024-05-16 13:47:33,856 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7865/api/predict "HTTP/1.1 200 OK"
Total time: 192.54 seconds
Alhamdulillah, my Fooocus is running smoothly, but I'm having trouble when I select FaceSwap and PyraCanny together—it just stops working. Any suggestions on how to fix this?
Please share the installation steps that you used for the Intel GPU.
Hello. I am a complete newbie at installing anything the way Fooocus should be installed, and my GPU is Intel(R) HD Graphics. I've tried to follow the instructions several times (created install.bat, changed run.bat and the content of other files), however when I run all of this in cmd, I get some errors:
install.bat:
D:\Fooocus_win64_2-5-0>.\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
Found existing installation: torch 2.1.0a0+cxx11.abi
Uninstalling torch-2.1.0a0+cxx11.abi:
Successfully uninstalled torch-2.1.0a0+cxx11.abi
Found existing installation: torchvision 0.16.0a0+cxx11.abi
Uninstalling torchvision-0.16.0a0+cxx11.abi:
Successfully uninstalled torchvision-0.16.0a0+cxx11.abi
Found existing installation: torchaudio 2.1.0a0+cxx11.abi
Uninstalling torchaudio-2.1.0a0+cxx11.abi:
Successfully uninstalled torchaudio-2.1.0a0+cxx11.abi
WARNING: Skipping torchtext as it is not installed.
WARNING: Skipping functorch as it is not installed.
WARNING: Skipping xformers as it is not installed.
D:\Fooocus_win64_2-5-0>.\python_embeded\python.exe -m pip install "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torch-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl" "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchaudio-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl" "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchvision-0.16.0a0+cxx11.abi-cp310-cp310-win_amd64.whl" "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl"
Collecting torch==2.1.0a0+cxx11.abi
Downloading https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torch-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl (217.6 MB)
---------------------------------------- 217.6/217.6 MB 1.6 MB/s eta 0:00:00
Collecting torchaudio==2.1.0a0+cxx11.abi
Downloading https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchaudio-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl (2.3 MB)
---------------------------------------- 2.3/2.3 MB 8.2 MB/s eta 0:00:00
Collecting torchvision==0.16.0a0+cxx11.abi
Downloading https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchvision-0.16.0a0+cxx11.abi-cp310-cp310-win_amd64.whl (790 kB)
---------------------------------------- 790.5/790.5 kB 4.5 MB/s eta 0:00:00
Collecting intel-extension-for-pytorch==2.1.10+xpu
Downloading https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl (367.2 MB)
---------------------------------------- 367.2/367.2 MB 1.1 MB/s eta 0:00:00
Requirement already satisfied: filelock in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.1.0a0+cxx11.abi) (3.12.2)
Requirement already satisfied: typing-extensions in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.1.0a0+cxx11.abi) (4.7.1)
Requirement already satisfied: sympy in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.1.0a0+cxx11.abi) (1.12)
Requirement already satisfied: networkx in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.1.0a0+cxx11.abi) (3.1)
Requirement already satisfied: jinja2 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.1.0a0+cxx11.abi) (3.1.2)
Requirement already satisfied: fsspec in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.1.0a0+cxx11.abi) (2023.6.0)
Requirement already satisfied: numpy in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torchvision==0.16.0a0+cxx11.abi) (1.26.4)
Requirement already satisfied: requests in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torchvision==0.16.0a0+cxx11.abi) (2.31.0)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torchvision==0.16.0a0+cxx11.abi) (10.4.0)
Requirement already satisfied: psutil in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from intel-extension-for-pytorch==2.1.10+xpu) (6.0.0)
Requirement already satisfied: packaging in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from intel-extension-for-pytorch==2.1.10+xpu) (24.1)
Requirement already satisfied: pydantic in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from intel-extension-for-pytorch==2.1.10+xpu) (2.1.1)
Requirement already satisfied: MarkupSafe>=2.0 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from jinja2->torch==2.1.0a0+cxx11.abi) (2.1.3)
Requirement already satisfied: annotated-types>=0.4.0 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from pydantic->intel-extension-for-pytorch==2.1.10+xpu) (0.5.0)
Requirement already satisfied: pydantic-core==2.4.0 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from pydantic->intel-extension-for-pytorch==2.1.10+xpu) (2.4.0)
Requirement already satisfied: charset-normalizer<4,>=2 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from requests->torchvision==0.16.0a0+cxx11.abi) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from requests->torchvision==0.16.0a0+cxx11.abi) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from requests->torchvision==0.16.0a0+cxx11.abi) (2.0.3)
Requirement already satisfied: certifi>=2017.4.17 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from requests->torchvision==0.16.0a0+cxx11.abi) (2023.5.7)
Requirement already satisfied: mpmath>=0.19 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from sympy->torch==2.1.0a0+cxx11.abi) (1.3.0)
Installing collected packages: torch, torchvision, torchaudio
WARNING: The scripts convert-caffe2-to-onnx.exe, convert-onnx-to-caffe2.exe and torchrun.exe are installed in 'D:\Fooocus_win64_2-5-0\python_embeded\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
And run_realistic.bat:
D:\Fooocus_win64_2-5-0>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --unet-in-bf16 --vae-in-bf16 --clip-in-fp16
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py', '--unet-in-bf16', '--vae-in-bf16', '--clip-in-fp16']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.5.3
[Cleanup] Attempting to delete content of temp dir C:\Users\Gigabyte\AppData\Local\Temp\fooocus
[Cleanup] Cleanup successful
D:\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
warn(
Traceback (most recent call last):
File "D:\Fooocus_win64_2-5-0\Fooocus\entry_with_update.py", line 46, in <module>
from launch import *
File "D:\Fooocus_win64_2-5-0\Fooocus\launch.py", line 147, in <module>
from webui import *
File "D:\Fooocus_win64_2-5-0\Fooocus\webui.py", line 10, in <module>
import modules.async_worker as worker
File "D:\Fooocus_win64_2-5-0\Fooocus\modules\async_worker.py", line 3, in <module>
from extras.inpaint_mask import generate_mask_from_image, SAMOptions
File "D:\Fooocus_win64_2-5-0\Fooocus\extras\inpaint_mask.py", line 6, in <module>
from extras.GroundingDINO.util.inference import default_groundingdino
File "D:\Fooocus_win64_2-5-0\Fooocus\extras\GroundingDINO\util\inference.py", line 3, in <module>
import ldm_patched.modules.model_management as model_management
File "D:\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_management.py", line 121, in <module>
total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
File "D:\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_management.py", line 90, in get_torch_device
return torch.device(torch.cuda.current_device())
File "D:\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 783, in current_device
_lazy_init()
File "D:\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
D:\Fooocus_win64_2-5-0>pause
Press any key to continue . . .
Please help me with further advice or instructions. What do I do to successfully install and use Fooocus on my computer? (Windows 10)
Many Thanks for your attention and help!
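If it helps to narrow this down: the traceback above ends in torch.cuda.current_device(), which seems to be the path Fooocus falls back to when intel_extension_for_pytorch cannot be imported, and the pip output earlier only shows "Installing collected packages: torch, torchvision, torchaudio". A small sketch of mine to list what is actually installed in the embedded environment:
# run with D:\Fooocus_win64_2-5-0\python_embeded\python.exe
from importlib.metadata import version, PackageNotFoundError

for name in ("torch", "torchvision", "torchaudio", "intel-extension-for-pytorch"):
    try:
        print(name, version(name))
    except PackageNotFoundError:
        print(name, "NOT INSTALLED")
# Per the installer.bat in this thread, the expected versions are
# torch 2.1.0a0+cxx11.abi and intel-extension-for-pytorch 2.1.10+xpu.
# If intel_extension_for_pytorch is missing, or torch is not the a0+cxx11.abi build,
# reinstalling the Intel wheels as described earlier in this thread is the usual fix.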