Runtime Error
I am getting this error. I have installed Triton, but it's still the same.
```
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-659-gc055f2d4
Commit hash: c055f2d43b07cbfd87ac3da4899a6d7ee52ebab9
Installing requirements
loading WD14-tagger reqs from L:\webui_forge_cu121_torch231\webui\extensions\stable-diffusion-webui-wd14-tagger\requirements.txt
Checking WD14-tagger requirements.
Launching Web UI with arguments: --xformers --cuda-malloc --opt-sdp-no-mem-attention --medvram
Arg --medvram is removed in Forge. Now memory management is fully automatic and you do not need any command flags. Please just remove this flag. In extreme cases, if you want to force previous lowvram/medvram behaviors, please use --always-offload-from-vram
Using cudaMallocAsync backend.
Total VRAM 8188 MB, total RAM 32472 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 Laptop GPU : cudaMallocAsync
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
L:\webui_forge_cu121_torch231\system\python\lib\site-packages\triton\windows_utils.py:315: UserWarning: Failed to find Python libs.
  warnings.warn("Failed to find Python libs.")
C:/Users/BULUT~1.HAR/AppData/Local/Temp/tmpteqycs5k/cuda_utils.c:14: error: include file 'Python.h' not found
Failed to compile. cc_cmd: ['L:\\webui_forge_cu121_torch231\\system\\python\\Lib\\site-packages\\triton\\runtime\\tcc\\tcc.exe', 'C:\\Users\\BULUT~1.HAR\\AppData\\Local\\Temp\\tmpteqycs5k\\cuda_utils.c', '-O3', '-shared', '-Wno-psabi', '-o', 'C:\\Users\\BULUT~1.HAR\\AppData\\Local\\Temp\\tmpteqycs5k\\cuda_utils.cp310-win_amd64.pyd', '-fPIC', '-lcuda', '-lpython3', '-LL:\\webui_forge_cu121_torch231\\system\\python\\Lib\\site-packages\\triton\\backends\\nvidia\\lib', '-LC:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\lib\\x64', '-IL:\\webui_forge_cu121_torch231\\system\\python\\Lib\\site-packages\\triton\\backends\\nvidia\\include', '-IC:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\include', '-IC:\\Users\\BULUT~1.HAR\\AppData\\Local\\Temp\\tmpteqycs5k', '-IL:\\webui_forge_cu121_torch231\\system\\python\\Include']
C:/Users/BULUT~1.HAR/AppData/Local/Temp/tmpopo1fhjl/cuda_utils.c:14: error: include file 'Python.h' not found
Failed to compile. cc_cmd: ['L:\\webui_forge_cu121_torch231\\system\\python\\Lib\\site-packages\\triton\\runtime\\tcc\\tcc.exe', 'C:\\Users\\BULUT~1.HAR\\AppData\\Local\\Temp\\tmpopo1fhjl\\cuda_utils.c', '-O3', '-shared', '-Wno-psabi', '-o', 'C:\\Users\\BULUT~1.HAR\\AppData\\Local\\Temp\\tmpopo1fhjl\\cuda_utils.cp310-win_amd64.pyd', '-fPIC', '-lcuda', '-lpython3', '-LL:\\webui_forge_cu121_torch231\\system\\python\\Lib\\site-packages\\triton\\backends\\nvidia\\lib', '-LC:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\lib\\x64', '-IL:\\webui_forge_cu121_torch231\\system\\python\\Lib\\site-packages\\triton\\backends\\nvidia\\include', '-IC:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\include', '-IC:\\Users\\BULUT~1.HAR\\AppData\\Local\\Temp\\tmpopo1fhjl', '-IL:\\webui_forge_cu121_torch231\\system\\python\\Include']
L:\webui_forge_cu121_torch231\system\python\lib\site-packages\transformers\utils\hub.py:128: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
C:/Users/BULUT~1.HAR/AppData/Local/Temp/tmpsvvei_go/cuda_utils.c:14: error: include file 'Python.h' not found
Failed to compile. cc_cmd: ['L:\\webui_forge_cu121_torch231\\system\\python\\Lib\\site-packages\\triton\\runtime\\tcc\\tcc.exe', 'C:\\Users\\BULUT~1.HAR\\AppData\\Local\\Temp\\tmpsvvei_go\\cuda_utils.c', '-O3', '-shared', '-Wno-psabi', '-o', 'C:\\Users\\BULUT~1.HAR\\AppData\\Local\\Temp\\tmpsvvei_go\\cuda_utils.cp310-win_amd64.pyd', '-fPIC', '-lcuda', '-lpython3', '-LL:\\webui_forge_cu121_torch231\\system\\python\\Lib\\site-packages\\triton\\backends\\nvidia\\lib', '-LC:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\lib\\x64', '-IL:\\webui_forge_cu121_torch231\\system\\python\\Lib\\site-packages\\triton\\backends\\nvidia\\include', '-IC:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v12.6\\include', '-IC:\\Users\\BULUT~1.HAR\\AppData\\Local\\Temp\\tmpsvvei_go', '-IL:\\webui_forge_cu121_torch231\\system\\python\\Include']
Traceback (most recent call last):
  File "L:\webui_forge_cu121_torch231\system\python\lib\site-packages\diffusers\utils\import_utils.py", line 853, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "importlib\__init__.py", line 126, in import_module
  File "
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "L:\webui_forge_cu121_torch231\system\python\lib\site-packages\diffusers\utils\import_utils.py", line 853, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "importlib\__init__.py", line 126, in import_module
  File "
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "L:\webui_forge_cu121_torch231\webui\launch.py", line 54, in <module>
```
Your webui/python install doesn't seem to be set up correctly. Did you set it up with a venv, or are you using your global Python? Going by the folder name, you used the pre-made release. Did you update the WebUI? Does it work if you uninstall Triton?
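For what it's worth, the `Python.h` errors in your log look like the compile step can't find CPython headers next to the embedded python. Here's a small standard-library sketch you can run with that same python to check (nothing Forge- or Triton-specific, just printing where this interpreter expects its headers to be):

```python
import sys
import sysconfig
from pathlib import Path

# Print where this interpreter expects its C headers and whether Python.h is
# actually there. The tcc command in the log passes the embedded python's
# Include folder via -I, so a missing Python.h here would match the
# "include file 'Python.h' not found" error.
include_dir = Path(sysconfig.get_paths()["include"])
print("Python executable:", sys.executable)
print("Expected include dir:", include_dir)
print("Python.h present:", (include_dir / "Python.h").exists())
```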
Hello, thanks for your quick response. It's not a fresh installation; it was installed a long time ago. I applied all the updates and uninstalled Triton, but no luck, it's still the same.
Have you always had a `~` in your Windows username? That's the only thing that really jumps out at me, and it could be interfering with some path resolution code.
You could also try running `webui-user.bat` from within the forge directory and letting it generate a venv for you.
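If you want to rule the username question out quickly, here's a small standard-library sketch that compares the temp path as Windows reports it with the resolved long path; the `BULUT~1.HAR` in the log may just be the short 8.3 form of the profile folder rather than a literal `~` in the account name:

```python
import tempfile
from pathlib import Path

# Compare the temp directory as reported (possibly an 8.3 short name such as
# BULUT~1.HAR) with the fully resolved long path. If the resolved form
# contains no '~', the tilde came from Windows short names, not from the
# account name itself.
temp_dir = Path(tempfile.gettempdir())
print("Temp dir as reported:", temp_dir)
print("Resolved long path:  ", temp_dir.resolve())
```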
Actually, there is no such character in my computer name; it was added by the code itself.
As you said, I ran `webui-user.bat` from the source folder, but it gave an error again and the problem was not fixed.
It's kind of odd that it's still trying to access the CUDA toolkit even after you uninstalled Triton. My best suggestion is to back up your models/extensions and reinstall Forge directly using git. Additionally, if you have an RTX 2000+ GPU, I would also just install PyTorch 2.7+cu128 so you get PyTorch attention plus a lot of the xformers speed-ups. There's not a huge reason to use xformers + Triton on Forge anymore if your GPU supports the newer torch versions.
Also, for Forge, many of your command-line args are irrelevant now. The only one you have that really does anything is `--cuda-malloc`.
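If you do move to the newer torch build, here's a quick throwaway sanity check (not part of Forge) that the built-in scaled dot-product attention path works on your GPU, which covers most of what xformers was being used for:

```python
import torch
import torch.nn.functional as F

# Throwaway sanity check after upgrading torch: confirms CUDA is visible and
# that the built-in scaled_dot_product_attention runs.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32
print("torch", torch.__version__, "| device:", device)

q = k = v = torch.randn(1, 8, 128, 64, device=device, dtype=dtype)
out = F.scaled_dot_product_attention(q, k, v)
print("SDPA output shape:", tuple(out.shape))
```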
I'll include the instructions:
- Back up models/extensions/images/etc.
- (Optional) Use the latest version of Python 3.11
- Delete the forge folder
- Open a new PowerShell prompt in the directory where you want to reinstall Forge (can usually be accessed via the right-click menu in File Explorer on Win 10/11. It may default to Command Prompt, in which case just enter `powershell` to switch)
- Run the command `git clone https://github.com/lllyasviel/stable-diffusion-webui-forge.git`
- Edit `webui-user.bat` to include your wanted commandline args. In this case I suggest the following (assuming an RTX 2000+ GPU): `set COMMANDLINE_ARGS=--cuda-malloc --cuda-stream --pin-shared-memory`
- Start `webui-user.bat`. It will create the venv in the current directory. Press `Ctrl+C` when you see "installing requirements" to stop the application
- With your PowerShell open in the root directory of Forge (where webui-user is), enter these commands. This will install torch, then print the torch version if successful:

  ```
  venv\Scripts\activate
  python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
  python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"
  ```

- Re-add models/extensions
- Start Forge by running `webui-user.bat`
If you hit any snags along the way, we'll circle back and take another look.