DeFooocus_colab.ipynb suddenly throwing errors, need help
Hi, I am new to this kind of AI tool and had zero knowledge or experience with it before. I found it on Google and have used it for less than a month, but it suddenly started giving me an error like this:
```
[DeFooocus] Preparing ...
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
error: subprocess-exited-with-error

× Building wheel for pygit2 (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Building wheel for pygit2 (pyproject.toml) ... error
ERROR: Failed building wheel for pygit2
error: failed-wheel-build-for-install

× Failed to build installable wheels for some pyproject.toml based projects
╰─> pygit2
/content
fatal: destination path 'DeFooocus' already exists and is not an empty directory.
/content/DeFooocus
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Installing build dependencies ... done
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Getting requirements to build wheel ... error
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
[DeFooocus] Starting ...
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--share', '--attention-split', '--always-high-vram', '--disable-offload-from-vram', '--all-in-fp16', '--theme', 'dark']
Python 3.12.11 (main, Jun  4 2025, 08:56:18) [GCC 11.4.0]
Fooocus version: 0.2
Error checking version for torchsde: No package metadata was found for torchsde
Installing requirements
Couldn't install requirements.
Command: "/usr/bin/python3" -m pip install -r "requirements_versions.txt" --prefer-binary
Error code: 1
stdout: Collecting torchsde==0.2.6 (from -r requirements_versions.txt (line 1))
  Using cached torchsde-0.2.6-py3-none-any.whl.metadata (5.3 kB)
Collecting einops==0.4.1 (from -r requirements_versions.txt (line 2))
  Using cached einops-0.4.1-py3-none-any.whl.metadata (10 kB)
Collecting transformers==4.30.2 (from -r requirements_versions.txt (line 3))
  Using cached transformers-4.30.2-py3-none-any.whl.metadata (113 kB)
Collecting safetensors==0.3.1 (from -r requirements_versions.txt (line 4))
  Using cached safetensors-0.3.1.tar.gz (34 kB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Collecting accelerate==0.21.0 (from -r requirements_versions.txt (line 5))
  Using cached accelerate-0.21.0-py3-none-any.whl.metadata (17 kB)
Collecting pyyaml==6.0 (from -r requirements_versions.txt (line 6))
  Using cached PyYAML-6.0.tar.gz (124 kB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'error'

stderr: error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

CMD Failed requirements: install -r "requirements_versions.txt"
Total VRAM 15095 MB, total RAM 12978 MB
Forcing FP16.
Set vram state to: HIGH_VRAM
Device: cuda:0 Tesla T4 : native
VAE dtype: torch.float32
Using split optimization for cross attention
Traceback (most recent call last):
  File "/content/DeFooocus/entry_with_update.py", line 46, in
```
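For what it's worth, the `Error checking version for torchsde: No package metadata was found for torchsde` line looks like a pre-flight check that reads installed package metadata, which in Python is typically done with `importlib.metadata`. A rough sketch of that kind of check (this is the general mechanism, not DeFooocus's actual code, and the function name is my own):

```python
from importlib import metadata
from typing import Optional

def check_version(package: str) -> Optional[str]:
    """Return the installed version of `package`, or None if it is not installed."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

# pip is present in the environment, so this prints its version string
print(check_version("pip"))
# torchsde never got installed because the requirements install failed earlier,
# so a check like this would report it as missing (None)
print(check_version("torchsde"))
```

So the "No package metadata" message just means the earlier `pip install -r requirements_versions.txt` step failed before torchsde was ever installed; the real failure is the wheel-build errors above it.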
What is wrong with this? Can someone help me? I have already tried the steps I found on Google to resolve it, but it is still the same. I also emailed the Google technical team, but have heard nothing so far.