
[bug]: can't detect safetensors files if moved, even if downloaded through InvokeAI

Open phazei opened this issue 1 year ago • 7 comments

Is there an existing issue for this problem?

  • [X] I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

3090

GPU VRAM

24 GB

Version number

5.2

Browser

Chrome

Python dependencies

No response

What happened

I installed the T5 encoder, then moved the whole t5_encoder folder to another location so it could be shared with other tools. When I have Invoke scan that folder, it lists the model, but clicking the plus gives the error: `Failed, t5_bnb_int8_quantized_encoder: no .safetensors files found`
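(For anyone else hitting this, a quick way to confirm the moved folder really does contain safetensors files somewhere in its tree; if the scanner only checks one directory level, shards that live in a subfolder would be missed. The path below is a hypothetical example, so point it at wherever you moved the folder.)

```python
from pathlib import Path

# Hypothetical path: wherever the t5_encoder folder was moved to
root = Path(r"D:\shared-models\t5_encoder")

# List every .safetensors file anywhere under the folder, with sizes
for f in sorted(root.rglob("*.safetensors")):
    print(f"{f.relative_to(root)}  ({f.stat().st_size / 1e9:.2f} GB)")

# Also list the JSON sidecars a model probe might expect to find
for f in sorted(root.rglob("*.json")):
    print(f.relative_to(root))
```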

What you expected to happen

I can understand it not supporting some files, but it downloaded these itself! I expect it to be able to scan and install the files wherever they are.

How to reproduce the problem

No response

Additional context

I really love the interface for InvokeAI, but of every tool out there, there is none that is pickier than it. Half the time it won't accept a file saying it can't determine what type of file it is. No other tool has that issue, they can all figure out what the files are. I'd rather it let me try the file and crash than refuse to load it to begin with. It's frustrating because I'd rather use this than Forge or Comfy.

Discord username

No response

phazei avatar Oct 25 '24 08:10 phazei

You could try removing the name from the SQLite database; the old entry may be preventing it from installing.
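If you want to check that, here is a rough sketch using Python's built-in sqlite3 module. The database path and the table/column names are assumptions (they vary between InvokeAI versions), so list the schema first and back up the file before deleting anything:

```python
import sqlite3

# Assumed database location; adjust to your InvokeAI root
con = sqlite3.connect(r"invokeai-root/databases/invokeai.db")
cur = con.cursor()

# Inspect the actual schema first -- table names vary by version
for (name,) in cur.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)

# Hypothetical cleanup once you've confirmed the table/column names:
# cur.execute("DELETE FROM models WHERE name LIKE ?", ("%t5%",))
# con.commit()

con.close()
```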

regiellis avatar Oct 25 '24 21:10 regiellis

I tried it on a fresh install that never had the old entry, when switching from standalone to Stability Matrix.

I also tried naming the config file the default 'config' or 'model', or matching the folder name, in a bunch of combinations. Doing that eventually got clip-vit-large-patch14 installed, but it never worked for T5. And when I tried using the clip that was moved and installed that way, it failed to work, so I deleted it and just had Invoke install it from its own servers to get things working again.

phazei avatar Oct 27 '24 05:10 phazei

Same issue. I also tried changing hashing_algorithm: to a few different values, and none of the changes helped.

tampadesignr avatar Nov 20 '24 15:11 tampadesignr

No idea what I'm doing wrong, but I'm having the same experience. I've watched a ton of YouTube videos and none of them mention this problem, yet I've started from scratch three times now and keep running into this or similar issues. I'm also using Stability Matrix with shared models, so perhaps that's where to start looking. Flux is the problem; SDXL is working, however.

UPDATE: I just tried another approach: I selected the Add Model tab "URL or Local Path", entered the path into which Invoke had downloaded the t5_base_encoder, and checked "In-Place Install". This seems to have worked. In my case the path was: F:\AI\Stability Matrix\Data\Packages\InvokeAI\invokeai-root\models\any\t5_encoder\t5_base_encoder

dcham23 avatar Dec 06 '24 02:12 dcham23

I have the same problem... not resolved...

Othello40 avatar Jan 15 '25 13:01 Othello40

Me too...

Iulipartenie avatar Feb 21 '25 13:02 Iulipartenie

Same here. I tried generating config.json and model.safetensors.index.json, but it's still not working. If this bug gets fixed, we won't need separate model files for Invoke and ComfyUI; downloading multiple copies of models can fill up an SSD pretty quickly :)

To generate config.json:

```python
import json
from transformers import T5Config

# Define the configuration with estimated properties
config = T5Config(
    architectures=["T5EncoderModel"],
    d_model=4096,  # Standard for T5-XXL
    num_layers=24,  # Encoder layers
    num_decoder_layers=24,  # Decoder layers
    num_heads=64,  # Multi-head attention
    d_ff=10240,  # Feedforward layer size
    vocab_size=32128,  # Common T5 vocabulary
    dropout_rate=0.1,
    layer_norm_epsilon=1e-6,
    relative_attention_num_buckets=32,
    relative_attention_max_distance=128,
    feed_forward_proj="gated-gelu",
    initializer_factor=1.0,
    is_encoder_decoder=True,
    eos_token_id=1,
    pad_token_id=0,
    tie_word_embeddings=False,
    torch_dtype="bfloat16",  # Adjust as needed
    _name_or_path="t5xxl_fp8_e4m3fn_scaled.safetensors",
    transformers_version="4.43.3",
    use_cache=True,
)

# Save as JSON file
config_path = "config.json"
with open(config_path, "w") as f:
    json.dump(config.to_dict(), f, indent=2)

print(f"Config file saved to {config_path}")
```
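If you go this route, it may be worth sanity-checking that the generated file round-trips through transformers before pointing Invoke at it; a small check, assuming config.json sits in the current directory:

```python
from transformers import T5Config

# Load the config back from the current directory to confirm it parses
cfg = T5Config.from_pretrained(".")
print(cfg.d_model, cfg.num_layers, cfg.feed_forward_proj)
```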

To generate model.safetensors.index.json:

```python
import json
import os
from safetensors.torch import load_file

# Define model file(s)
model_files = ["t5xxl_fp8_e4m3fn_scaled.safetensors"]  # Update if multiple shards exist

# Load tensors from the first file
model_path = model_files[0]
model_data = load_file(model_path)

# Generate weight map
weight_map = {key: os.path.basename(model_path) for key in model_data.keys()}

# Get total size of model file(s)
total_size = sum(os.path.getsize(f) for f in model_files)

# Create index structure
index_data = {
    "metadata": {"total_size": total_size},
    "weight_map": weight_map,
}

# Save to JSON
index_file = "model.safetensors.index.json"
with open(index_file, "w") as f:
    json.dump(index_data, f, indent=2)

print(f"Index file saved as {index_file}")
```

I am unable to add the t5_base_encoder through the "Add Models" section, but as you know, when we install it through the recommended models list, it gets installed.

[Screenshot]

As you can see here, this model is installed after you install the Flux models. I was unable to add it through "Add Models", but in the screenshot below you can see that it is installed 👍

[Screenshot]

These might help fix this bug. There is also difficulty adding models from Civitai: I tried pointing it at my ComfyUI models directory, but it is still unable to use some Civitai SDXL models.

greyrabbit2003 avatar Mar 09 '25 15:03 greyrabbit2003