[bug]: many models fail to import when coming from Auto1111
Is there an existing issue for this problem?
- [X] I have searched the existing issues
Operating system
Windows
GPU vendor
Nvidia (CUDA)
GPU model
RTX 3060
GPU VRAM
12GB
Version number
5.0.0
Browser
Edge 128.0.2739.79 (Official build) (64-bit)
Python dependencies
No response
What happened
I went to the model manager and scanned my model folder from my Auto1111 install; many models fail to import in Invoke. Here are some of them:
Unable to determine model type:
- `ViT-L-14-BEST-smooth-GmP-TE-only-HF-format.safetensors` (text encoder, CLIP-L fine-tune)
- `ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors` (text encoder, CLIP-L fine-tune)
- `t5xxl_fp8_e4m3fn.safetensors` (text encoder, T5 fp8)
- `t5xxl_fp16.safetensors` (text encoder, T5 fp16)
- `ControlNetHED.pth` (ControlNet HED preprocessor)
- `clip_l.safetensors` (text encoder, CLIP-L)
- `body_pose_model.pth` (OpenPose preprocessor model)
- `ip-adapter-plus-face_sd15.pth` (IP-Adapter Plus Face model for SD 1.5)
- `kohya_controllllite_xl_blur.safetensors`
- `t2i-adapter_diffusers_xl_openpose.safetensors`
- `kohya_controllllite_xl_depth.safetensors`
- `t2i-adapter_xl_openpose.safetensors`
- `kohya_controllllite_xl_canny.safetensors`
- `bdsqlsz_controlllite_xl_tile_realistic.safetensors`
Unknown LoRA type:
- `fluxlora.safetensors` (Flux LoRA, trained in Flux Gym)
- `SameFace_fix.safetensors` (Flux LoRA)
Cannot determine base type:
- `sd3_medium_incl_clips.safetensors` (SD3 Medium, including CLIP)
- `sd3_medium_incl_clips_t5xxlfp8.safetensors` (SD3 Medium, including CLIP and T5)
- `ae.safetensors` (Flux VAE)
- `control-lora-openposeXL2-rank256.safetensors`
- `thibaud_xl_openpose_256lora.safetensors`
Unsupported model file extension .bin:
- `ip-adapter-faceid-plusv2_sd15.bin` (IP-Adapter FaceID Plus v2 for SD 1.5)
- `ip-adapter-faceid-plusv2_sdxl.bin` (IP-Adapter FaceID Plus v2 for SDXL)
What do I do with these failed imports? Can I manually import them and specify the model type? Some are probably unsupported, or not yet supported, but others should be usable. Are alternative T5 quants not supported for Flux? Clip-L finetunes? Many SDXL ControlNets? How do I know what is and is not supported in Invoke?
What you expected to happen
Completed import.
How to reproduce the problem
Just tried to install these models in the model manager.
Additional context
No response
Discord username
No response
same
me too! It doesn't want to import local models I already have.
cannot add vae for Flux (ae.safetensors) cannot add T5 for Flux
Also, important: it SHOULD understand both the .sft and .safetensors extensions.
> cannot add vae for Flux (ae.safetensors) cannot add T5 for Flux
Yeah, you have to add a Flux VAE and Flux T5 from the "starter models" tab in the model manager. I think the VAE is a diffusers version, and I think the T5 is also unique to Invoke (there are two versions available).
Initially installed the quantized Flux model with the Invoke GUI. Then set up a second (dev) environment and tried to import the existing models (with an in-place install). Same import issues:

- cannot add VAE for Flux. Error: `InvalidModelConfigException: Cannot determine base type`
- cannot add T5 for Flux (quantized). Error: `InvalidModelConfigException: invokeai/models/any/t5_encoder/t5_bnb_int8_quantized_encoder/text_encoder_2: no .safetensors files found`
I had a similar requirement and ran into this very same error. The issue lies in the `_load_model` function of the `T5EncoderCheckpointModel` class in `flux.py`. That function incorrectly appends the submodel directory name (`text_encoder_2` or `tokenizer_2`) to the model config path when searching for the encoder and tokenizer.
The problem is illustrated by this example:

- Model config path: `invokeai/models/any/t5_encoder/t5_bnb_int8_quantized_encoder/text_encoder_2`
- Actual model location: `invokeai/models/any/t5_encoder/t5_bnb_int8_quantized_encoder/text_encoder_2` (the model files sit directly in this directory)
- Path the function actually searches: `invokeai/models/any/t5_encoder/t5_bnb_int8_quantized_encoder/text_encoder_2/text_encoder_2` (the directory name is appended a second time)
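A minimal sketch of the suspected path handling, assuming `_load_model` joins the submodel directory name onto the config path roughly like this (the `from_pretrained` details and the exact InvokeAI code are not shown; only the path construction is illustrated):

```python
from pathlib import Path

# Path stored in the model config -- it already ends in text_encoder_2:
config_path = Path(
    "invokeai/models/any/t5_encoder/t5_bnb_int8_quantized_encoder/text_encoder_2"
)

# Buggy construction: the submodel directory name is appended again,
# yielding .../text_encoder_2/text_encoder_2, which does not exist.
buggy_path = config_path / "text_encoder_2"

# One possible fix: only append the submodel directory when the config
# path does not already end in it.
submodel = "text_encoder_2"
fixed_path = config_path if config_path.name == submodel else config_path / submodel

print(buggy_path)  # .../text_encoder_2/text_encoder_2 -> "no .safetensors files found"
print(fixed_path)  # .../text_encoder_2
```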
My workaround: I created a nested directory with the same name (e.g. `text_encoder_2/text_encoder_2`) inside the existing `text_encoder_2` directory and moved the model files into it, which matches the function's erroneous path construction. The same process was repeated for the tokenizer (`tokenizer_2`). A sketch follows below.
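A minimal sketch of that filesystem workaround (the base path is taken from the example above; adjust it for your own install):

```python
import shutil
from pathlib import Path

model_dir = Path("invokeai/models/any/t5_encoder/t5_bnb_int8_quantized_encoder")

# Nest each submodel's files one directory deeper, so they land where the
# loader's doubled path (e.g. text_encoder_2/text_encoder_2) looks for them.
for sub in ("text_encoder_2", "tokenizer_2"):
    outer = model_dir / sub
    inner = outer / sub
    inner.mkdir(exist_ok=True)
    for item in outer.iterdir():
        if item != inner:  # don't move the new nested directory into itself
            shutil.move(str(item), str(inner / item.name))
```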
It's Feb 2025 and this is still a bug?
Hey, I can't import the fp16 T5 encoder - is it not supported by InvokeAI?
I have the same error
[2025-04-22 10:28:03,842]::[ModelInstallService]::ERROR --> Model install error: /app/models/core/sd3.5_medium.safetensors
InvalidModelConfigException: Cannot determine base type
[2025-04-22 10:28:12,540]::[ModelInstallService]::ERROR --> Model install error: /app/models/any/t5_encoder/t5_base_encoder/text_encoder_2
InvalidModelConfigException: /app/models/any/t5_encoder/t5_base_encoder/text_encoder_2: no .safetensors files found