“Error loading script: main.py ... No module named 'torchvision.transforms.functional_tensor'”?
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-669-gdfdcbab6
Commit hash: dfdcbab685e57677014f05a3309b48cc87383167
Launching Web UI with arguments: --forge-ref-a1111-home C:/Generative_AI/stable-diffusion-webui --controlnet-dir C:/Generative_AI/stable-diffusion-webui/models/ControlNet --embeddings-dir C:/Generative_AI/stable-diffusion-webui/embeddings --hypernetwork-dir C:/Generative_AI/stable-diffusion-webui/models/hypernetworks --lora-dir C:/Generative_AI/stable-diffusion-webui/models/Lora --ckpt-dir 'C:\Generative_AI\stable-diffusion-webui\models\Stable-diffusion' --vae-dir 'C:\Generative_AI\stable-diffusion-webui\models\VAE' --controlnet-preprocessor-models-dir 'C:\Generative_AI\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads'
Total VRAM 8188 MB, total RAM 31969 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Ti : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
C:\Generative_AI\webui_forge\system\python\lib\site-packages\transformers\utils\hub.py:128: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
warnings.warn(
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: C:\Generative_AI\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
*** Error loading script: main.py
Traceback (most recent call last):
File "C:\Generative_AI\webui_forge\webui\modules\scripts.py", line 525, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Generative_AI\webui_forge\webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "
dirname: C:\Generative_AI\webui_forge\webui\localizations
localizations: {'zh_CN': 'C:\Generative_AI\webui_forge\webui\extensions\stable-diffusion-webui-localization-zh_CN\localizations\zh_CN.json'}
sd-webui-prompt-all-in-one background API service started successfully.
2025-09-10 23:43:26,059 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'C:\Generative_AI\stable-diffusion-webui\models\Stable-diffusion\Flux\Flux.1-Dev-Q4_K_v30.gguf', 'hash': '2f1f9398'}, 'additional_modules': ['C:\Generative_AI\stable-diffusion-webui\models\VAE\ae.safetensors', 'C:\Generative_AI\stable-diffusion-webui\models\VAE\clip_l.safetensors', 'C:\Generative_AI\stable-diffusion-webui\models\VAE\t5xxl_fp8_e4m3fn.safetensors'], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL: http://127.0.0.1:7861
To create a public link, set share=True in launch().
Startup time: 29.8s (prepare environment: 2.5s, launcher: 0.4s, import torch: 6.0s, initialize shared: 0.1s, other imports: 0.3s, load scripts: 1.6s, create ui: 1.7s, gradio launch: 17.0s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 87.49% GPU memory (7163.00 MB) to load weights, and use 12.51% GPU memory (1024.00 MB) to do matrix computation.
The problem is coming from one of your extensions — it's either outdated or incompatible with the Torch version you're running. The easiest way to track it down is to disable your extensions one by one and watch the CMD log to see when the error disappears.
Or, more directly, here's how you can fix it:
- Option 1 – Update the extension. Check whether openpose-editor (or whichever extension bundles basicsr) has released a newer version. Most of them have already fixed this by switching the import line to:
from torchvision.transforms.functional import rgb_to_grayscale
- Option 2 – Fix it yourself. If there's no update, just edit the file directly with a plain text editor (Notepad++ or Notepad). Open:
C:\Generative_AI\webui_forge\system\python\lib\site-packages\basicsr\data\degradations.py
Find this line:
from torchvision.transforms.functional_tensor import rgb_to_grayscale
Change it to:
from torchvision.transforms.functional import rgb_to_grayscale
Save it, restart WebUI, and you should be good.
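If you'd rather not hand-edit the file, the one-line swap from Option 2 can be scripted. This is a minimal sketch — `patch_import` is just a helper name, and the path in the example is the one from this install, so adjust it to your own environment:

```python
from pathlib import Path

# The import removed in newer torchvision, and its replacement.
OLD = "from torchvision.transforms.functional_tensor import rgb_to_grayscale"
NEW = "from torchvision.transforms.functional import rgb_to_grayscale"

def patch_import(path: Path) -> bool:
    """Swap the removed functional_tensor import for the new location.

    Returns True if the file was changed, False if there was nothing to do.
    """
    text = path.read_text(encoding="utf-8")
    if OLD not in text:
        return False
    path.write_text(text.replace(OLD, NEW), encoding="utf-8")
    return True

# Example (path from this install -- adjust to yours):
# patch_import(Path(r"C:\Generative_AI\webui_forge\system\python"
#                   r"\lib\site-packages\basicsr\data\degradations.py"))
```

Running it a second time is a no-op, so it's safe to re-run after updates reinstall basicsr.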
- Option 3 – Downgrade torchvision. Worst case, you can roll back torchvision to the last version that still ships functional_tensor (the module was removed in 0.17):
pip install torchvision==0.16.2
But since you're on PyTorch 2.3.1+cu121 (which pairs with torchvision 0.18.x), I'd only do this as a last resort — pip will drag torch down along with it, and mixing versions can break your CUDA setup.
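Before (or after) any of this, you can ask Python directly whether the old module still exists in your environment. A quick diagnostic sketch — `module_available` is just an illustrative helper, not part of WebUI:

```python
import importlib.util

def module_available(name: str) -> bool:
    """True if `name` can be imported in the current environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # The parent package (e.g. torchvision itself) is missing.
        return False

# On torchvision >= 0.17 this reports False -- which is exactly why
# the old basicsr import line blows up:
print(module_available("torchvision.transforms.functional_tensor"))
```

If it prints True, the error is coming from something other than a missing module, and the extension-by-extension hunt above is the way to go.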