
[bug]: fail to load ckpt model after upgrading to v2.3.0

Open slavikshen opened this issue 2 years ago • 2 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

OS

macOS

GPU

mps

VRAM

No response

What happened?

Before upgrading to v2.3.0, I could load my ckpt models downloaded from both Hugging Face and Civitai. After the upgrade, none of them can be loaded.

  1. configs/stable-diffusion/v1-inference.yaml is missing
  2. after I download v1-inference.yaml from Hugging Face, I get the following error (a patch sketch follows this list):
TypeError: LatentDiffusion.__init__() missing 1 required positional argument: 'personalization_config'
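
For reference, a minimal patch sketch for adding the missing block to the downloaded config. The EmbeddingManager target and the parameter values below are assumptions modelled on the textual-inversion-style configs InvokeAI ships; compare them against the v1-inference.yaml bundled with a working install before relying on them.

# Hypothetical sketch: add the personalization_config block that InvokeAI's
# LatentDiffusion expects to a vanilla v1-inference.yaml.
# The parameter values are assumptions -- verify against the config that
# ships with InvokeAI.
from omegaconf import OmegaConf

config_path = "configs/stable-diffusion/v1-inference.yaml"
cfg = OmegaConf.load(config_path)

if "personalization_config" not in cfg.model.params:
    cfg.model.params.personalization_config = OmegaConf.create({
        "target": "ldm.modules.embedding_manager.EmbeddingManager",
        "params": {
            "placeholder_strings": ["*"],
            "initializer_words": ["sculpture"],
            "per_image_tokens": False,
            "num_vectors_per_token": 1,
            "progressive_words": False,
        },
    })
    OmegaConf.save(config=cfg, f=config_path)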

The complete stack trace:

Traceback (most recent call last):
  File "/Volumes/Work/InvokeAI/.venv/lib/python3.10/site-packages/ldm/invoke/CLI.py", line 158, in main
    main_loop(gen, opt)
  File "/Volumes/Work/InvokeAI/.venv/lib/python3.10/site-packages/ldm/invoke/CLI.py", line 217, in main_loop
    command, operation = do_command(command, gen, opt, completer)
  File "/Volumes/Work/InvokeAI/.venv/lib/python3.10/site-packages/ldm/invoke/CLI.py", line 480, in do_command
    import_model(path[1], gen, opt, completer)
  File "/Volumes/Work/InvokeAI/.venv/lib/python3.10/site-packages/ldm/invoke/CLI.py", line 594, in import_model
    if not _verify_load(model_name, gen):
  File "/Volumes/Work/InvokeAI/.venv/lib/python3.10/site-packages/ldm/invoke/CLI.py", line 674, in _verify_load
    if not gen.model_manager.get_model(model_name):
  File "/Volumes/Work/InvokeAI/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 97, in get_model
    requested_model, width, height, hash = self._load_model(model_name)
  File "/Volumes/Work/InvokeAI/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 316, in _load_model
    model, width, height, model_hash = self._load_ckpt_model(model_name, mconfig)
  File "/Volumes/Work/InvokeAI/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 399, in _load_ckpt_model
    model = instantiate_from_config(omega_config.model)
  File "/Volumes/Work/InvokeAI/.venv/lib/python3.10/site-packages/ldm/util.py", line 92, in instantiate_from_config
    return get_obj_from_str(config['target'])(
TypeError: LatentDiffusion.__init__() missing 1 required positional argument: 'personalization_config'
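
In other words, the vanilla v1-inference.yaml from Hugging Face does not define a personalization_config block, while the LatentDiffusion class in InvokeAI's ldm package requires it as a constructor argument. A minimal, simplified sketch of the failure mode (this is not InvokeAI's actual code, just an illustration of the pattern):

# instantiate_from_config expands the YAML "params" block into keyword
# arguments, so a required constructor argument that the config file does
# not provide surfaces as this TypeError.
class LatentDiffusion:
    def __init__(self, personalization_config, **kwargs):
        self.personalization_config = personalization_config

def instantiate_from_config(config):
    # "target" names the class to build, "params" its constructor kwargs
    return LatentDiffusion(**config.get("params", {}))

config = {"target": "ldm.models.diffusion.ddpm.LatentDiffusion", "params": {}}
instantiate_from_config(config)
# TypeError: LatentDiffusion.__init__() missing 1 required positional
#   argument: 'personalization_config'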

Screenshots

No response

Additional context

No response

Contact Details

No response

slavikshen · Feb 05 '23 15:02

Everything works in the v2.2.5 branch.

slavikshen · Feb 05 '23 16:02

Wait for the 2.3.0 release and then run the installer on a fresh directory. You’ll be able to load your old ckpt files without duplicating them.

lstein · Feb 08 '23 01:02
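
For completeness, a hedged sketch of that workflow on a fresh 2.3.0 install: register the existing checkpoint by path from the interactive CLI, which adds it to models.yaml without copying the file. The !import_model command name is inferred from the import_model call in the traceback above, and the prompt and path shown are placeholders that may differ in your build.

invoke> !import_model /path/to/your-model.ckpt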

There has been no activity in this issue for 14 days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release.

github-actions[bot] · Mar 12 '23 06:03