
[bug]: dev/diffusers branch fails to load embeddings

Open brucethemoose opened this issue 3 years ago • 2 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

OS

Linux

GPU

cuda

VRAM

6GB

What happened?

As the title says, it fails to load the embedding found here: https://civitai.com/models/2032/empire-style

~/AI/InvokeAI dev/diffusers 40s
invokeai ❯ invoke.py --root_dir /home/alpha/Storage/AIModels/InvokeAI --autoconvert /home/alpha/Storage/AIModels/autoconvert/ --web
* Initializing, be patient...
>> Initialization file /home/alpha/Storage/AIModels/InvokeAI/invokeai.init found. Loading...
>> Internet connectivity is True
>> InvokeAI runtime directory is "/home/alpha/Storage/AIModels/InvokeAI"
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type cuda
>> Current VRAM usage:  0.00G
>> Loading diffusers model from stabilityai/stable-diffusion-2-1
  | Using faster float16 precision
Fetching 12 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:00<00:00, 37560.93it/s]
  | Default image dimensions = 768 x 768
>> Model loaded in 3.12s
>> Max VRAM used to load the model: 2.60G
>> Current VRAM usage:2.60G
>> Loading textual inversion from Place Textual Inversion embeddings here.txt
>> Not a recognized embedding file: /home/alpha/Storage/AIModels/InvokeAI/embeddings/Place Textual Inversion embeddings here.txt
>> Failed to load embedding located at /home/alpha/Storage/AIModels/InvokeAI/embeddings/Place Textual Inversion embeddings here.txt. Unsupported file.
>> Loading textual inversion from Style-Empire.pt
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/alpha/AI/InvokeAI/invokeai/bin/invoke.py:7 in <module>                                     │
│                                                                                                  │
│   4 __import__('pkg_resources').require('InvokeAI==2.2.5')                                       │
│   5 __file__ = '/home/alpha/AI/InvokeAI/scripts/invoke.py'                                       │
│   6 with open(__file__) as f:                                                                    │
│ ❱ 7 │   exec(compile(f.read(), __file__, 'exec'))                                                │
│   8                                                                                              │
│                                                                                                  │
│ /home/alpha/AI/InvokeAI/scripts/invoke.py:9 in <module>                                          │
│                                                                                                  │
│    6 │   os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"                                         │
│    7                                                                                             │
│    8 import ldm.invoke.CLI                                                                       │
│ ❱  9 ldm.invoke.CLI.main()                                                                       │
│   10                                                                                             │
│   11                                                                                             │
│                                                                                                  │
│ /home/alpha/AI/InvokeAI/ldm/invoke/CLI.py:125 in main                                            │
│                                                                                                  │
│    122 │                                                                                         │
│    123 │   # preload the model                                                                   │
│    124 │   try:                                                                                  │
│ ❱  125 │   │   gen.load_model()                                                                  │
│    126 │   except AssertionError:                                                                │
│    127 │   │   emergency_model_reconfigure(opt)                                                  │
│    128 │   │   sys.exit(-1)                                                                      │
│                                                                                                  │
│ /home/alpha/AI/InvokeAI/ldm/generate.py:814 in load_model                                        │
│                                                                                                  │
│    811 │   │   '''                                                                               │
│    812 │   │   preload model identified in self.model_name                                       │
│    813 │   │   '''                                                                               │
│ ❱  814 │   │   self.set_model(self.model_name)                                                   │
│    815 │                                                                                         │
│    816 │   def set_model(self,model_name):                                                       │
│    817 │   │   """                                                                               │
│                                                                                                  │
│ /home/alpha/AI/InvokeAI/ldm/generate.py:862 in set_model                                         │
│                                                                                                  │
│    859 │   │   │   │   │   if verbose:                                                           │
│    860 │   │   │   │   │   │   print(f'>> Loading textual inversion from {name}')                │
│    861 │   │   │   │   │   ti_path = os.path.join(root, name)                                    │
│ ❱  862 │   │   │   │   │   self.model.textual_inversion_manager.load_textual_inversion(ti_path)  │
│    863 │   │   │   print(f'>> Textual inversions available: {", ".join(self.model.textual_inver  │
│    864 │   │                                                                                     │
│    865 │   │   self.model_name = model_name                                                      │
│                                                                                                  │
│ /home/alpha/AI/InvokeAI/ldm/modules/textual_inversion_manager.py:62 in load_textual_inversion    │
│                                                                                                  │
│    59 │   │                                                                                      │
│    60 │   │   embedding_info = self._parse_embedding(ckpt_path)                                  │
│    61 │   │   if embedding_info:                                                                 │
│ ❱  62 │   │   │   self._add_textual_inversion(embedding_info['name'], embedding_info['embeddin   │
│    63 │   │   else:                                                                              │
│    64 │   │   │   print(f'>> Failed to load embedding located at {ckpt_path}. Unsupported file   │
│    65                                                                                            │
│                                                                                                  │
│ /home/alpha/AI/InvokeAI/ldm/modules/textual_inversion_manager.py:87 in _add_textual_inversion    │
│                                                                                                  │
│    84 │   │   pad_token_strings = [trigger_str + "-!pad-" + str(pad_index) for pad_index in ra   │
│    85 │   │                                                                                      │
│    86 │   │   try:                                                                               │
│ ❱  87 │   │   │   trigger_token_id = self._get_or_create_token_id_and_assign_embedding(trigger   │
│    88 │   │   │   # todo: batched UI for faster loading when vector length >2                    │
│    89 │   │   │   pad_token_ids = [self._get_or_create_token_id_and_assign_embedding(pad_token   │
│    90 │   │   │   │   │   │   │    for (i, pad_token_str) in enumerate(pad_token_strings)]       │
│                                                                                                  │
│ /home/alpha/AI/InvokeAI/ldm/modules/textual_inversion_manager.py:159 in                          │
│ _get_or_create_token_id_and_assign_embedding                                                     │
│                                                                                                  │
│   156 │   │   token_id = self.tokenizer.convert_tokens_to_ids(token_str)                         │
│   157 │   │   if token_id == self.tokenizer.unk_token_id:                                        │
│   158 │   │   │   raise RuntimeError(f"Unable to find token id for token '{token_str}'")         │
│ ❱ 159 │   │   self.text_encoder.get_input_embeddings().weight.data[token_id] = embedding         │
│   160 │   │                                                                                      │
│   161 │   │   return token_id                                                                    │
│   162                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: The expanded size of the tensor (1024) must match the existing size (768) at non-singleton dimension 0.  Target sizes: [1024].  Tensor sizes: [768]

I am on 132c960

This seems to happen with every other embedding I can find, unless I am extremely unlucky.
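For anyone hitting the same RuntimeError, the 768 vs 1024 mismatch comes from the text encoders: SD 1.x uses a 768-wide token embedding, while SD 2.x uses a 1024-wide one, so an SD 1.x-trained embedding cannot be copied into an SD 2.x text encoder. A minimal diagnostic sketch, assuming the .pt file uses one of the common string_to_param / emb_params layouts (other layouts exist); only get_input_embeddings() comes from the traceback above, the rest is illustrative:

# Diagnostic sketch: compare an embedding file's vector width with the
# loaded model's text-encoder width. The layout handling is an assumption.
import torch
from transformers import CLIPTextModel

ckpt = torch.load("Style-Empire.pt", map_location="cpu")
if "string_to_param" in ckpt:                       # A1111-style layout
    embedding = next(iter(ckpt["string_to_param"].values()))
elif "emb_params" in ckpt:                          # alternative layout
    embedding = ckpt["emb_params"]
else:
    raise ValueError(f"Unrecognized embedding layout: {list(ckpt.keys())}")

text_encoder = CLIPTextModel.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="text_encoder"
)
model_dim = text_encoder.get_input_embeddings().weight.shape[-1]    # 1024 for SD 2.1
print(f"embedding dim: {embedding.shape[-1]}, model dim: {model_dim}")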

Screenshots

No response

Additional context

Might be related to https://github.com/invoke-ai/InvokeAI/issues/1829

Contact Details

brucethemoose in discord

brucethemoose avatar Jan 02 '23 14:01 brucethemoose

The problem is that an embedding trained on a Stable Diffusion 1.5 model won't load onto Stable Diffusion 2.1. I think it's because the embedding dimensions are incompatible (768 vs 1024). I'll see about putting in a check for that.
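A minimal sketch of the kind of check described here, written against the load path visible in the traceback; apart from get_input_embeddings(), the names are assumptions and this is not the actual InvokeAI patch:

# Sketch of a dimension guard before copying an embedding row into the
# text encoder (hypothetical helper, not the shipped fix).
import torch

def embedding_matches_model(embedding: torch.Tensor, text_encoder) -> bool:
    """Return True if the embedding's width fits the loaded text encoder."""
    model_dim = text_encoder.get_input_embeddings().weight.shape[-1]  # 768 (SD 1.x) or 1024 (SD 2.x)
    return embedding.shape[-1] == model_dim

# Hypothetical caller: skip with a warning instead of raising the RuntimeError above.
# if not embedding_matches_model(embedding, self.text_encoder):
#     print(f">> Skipping {ckpt_path}: embedding was trained for a different base model.")
#     return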

lstein avatar Jan 02 '23 23:01 lstein

@lstein if you fixed this between 2.2.5 and 2.3.0RC, it's actually causing new issues. I'm unsure whether to file this as a feature request or a bug, but loading an SD 2.1 checkpoint also loaded all of the SD 1.5 embeddings instead of only the SD2-compatible ones.

Style-Empire has been loading with no issues for me in 2.2.5 and also in 2.3.0RC6.

Ignore the unsupported-file errors; I'm assuming that skipping those other file types will be coded in sometime in the future.

>> Offloading stable-diffusion-1.5 to CPU
>> Loading diffusers model from stabilityai/stable-diffusion-2-1
  | Using faster float16 precision
Fetching 12 files: 100%|████████████████████████████████████████████████████████████| 12/12 [00:00<00:00, 23696.63it/s]
  | Default image dimensions = 768 x 768
>> Model loaded in 8.50s
>> Max VRAM used to load the model: 2.60G
>> Current VRAM usage:2.60G
>> Not a recognized embedding file: C:\Users\warre\AI\WebUI - InvokeAI\embeddings\+trigger words.txt
>> Failed to load embedding located at C:\Users\warre\AI\WebUI - InvokeAI\embeddings\+trigger words.txt. Unsupported file.
>> Not a recognized embedding file: C:\Users\warre\AI\WebUI - InvokeAI\embeddings\advntr.preview.png
>> Failed to load embedding located at C:\Users\warre\AI\WebUI - InvokeAI\embeddings\advntr.preview.png. Unsupported file.
>> Not a recognized embedding file: C:\Users\warre\AI\WebUI - InvokeAI\embeddings\AmandaComaPT-2150.preview.png
>> Failed to load embedding located at C:\Users\warre\AI\WebUI - InvokeAI\embeddings\AmandaComaPT-2150.preview.png. Unsupported file.
>> Not a recognized embedding file: C:\Users\warre\AI\WebUI - InvokeAI\embeddings\AmandaComaPT-4850.preview.png
>> Failed to load embedding located at C:\Users\warre\AI\WebUI - InvokeAI\embeddings\AmandaComaPT-4850.preview.png. Unsupported file.
>> Not a recognized embedding file: C:\Users\warre\AI\WebUI - InvokeAI\embeddings\AmandaComaPT-6150.preview.png
>> Failed to load embedding located at C:\Users\warre\AI\WebUI - InvokeAI\embeddings\AmandaComaPT-6150.preview.png. Unsupported file.
>> Not a recognized embedding file: C:\Users\warre\AI\WebUI - InvokeAI\embeddings\AmandaComaPT-8700.preview.png
>> Failed to load embedding located at C:\Users\warre\AI\WebUI - InvokeAI\embeddings\AmandaComaPT-8700.preview.png. Unsupported file.
>> Not a recognized embedding file: C:\Users\warre\AI\WebUI - InvokeAI\embeddings\olfn.preview.png
>> Failed to load embedding located at C:\Users\warre\AI\WebUI - InvokeAI\embeddings\olfn.preview.png. Unsupported file.
>> Not a recognized embedding file: C:\Users\warre\AI\WebUI - InvokeAI\embeddings\Style-Empire.preview.png
>> Failed to load embedding located at C:\Users\warre\AI\WebUI - InvokeAI\embeddings\Style-Empire.preview.png. Unsupported file.
>> Not a recognized embedding file: C:\Users\warre\AI\WebUI - InvokeAI\embeddings\Style-Princess.preview.png
>> Failed to load embedding located at C:\Users\warre\AI\WebUI - InvokeAI\embeddings\Style-Princess.preview.png. Unsupported file.
>> Textual inversions available: advntr-2300, AmandaComaPT-2150, AmandaComaPT-4850, AmandaComaPT-6150, AmandaComaPT-8700, testTurner, InkPunk768-2500, mdjrny-ppc-150, olfn-6250, PlanIt, protogemb2, Style_Empire, Style-Princess
>> Setting Sampler to k_lms (LMSDiscreteScheduler)
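On the "Unsupported file" noise in the log above: a minimal sketch of how a loader could skip preview images and text notes up front instead of warning on each one. The extension list and function name are assumptions, not InvokeAI's implementation:

# Sketch: quietly ignore files that cannot be textual-inversion embeddings
# while walking the embeddings directory (extension list is an assumption).
import os

EMBEDDING_EXTENSIONS = {".pt", ".bin", ".safetensors"}

def iter_embedding_files(embeddings_dir: str):
    """Yield only paths whose extension looks like an embedding checkpoint."""
    for root, _dirs, files in os.walk(embeddings_dir):
        for name in files:
            if os.path.splitext(name)[1].lower() in EMBEDDING_EXTENSIONS:
                yield os.path.join(root, name)
            # .txt notes and *.preview.png thumbnails are skipped silently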

ShaguarWKL avatar Feb 09 '23 09:02 ShaguarWKL

There has been no activity in this issue for 14 days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release.

github-actions[bot] avatar Mar 13 '23 06:03 github-actions[bot]