InvokeAI
[bug]: cannot load prmj_v1 model (a 2.1 model)
Is there an existing issue for this?
- [X] I have searched the existing issues
OS
Windows
GPU
cuda
VRAM
8
What happened?
When trying to load the model, I get the error below. I added the config file that comes with the model, and I tried both the ckpt and safetensors versions, with no difference. I also tried converting the model, which throws an error as well (not listed here). This is with the latest 2.3.0 version.
The model is here: https://civitai.com/models/6465/prmj
```
Loading prmj_v1 from C:/dev/m/prmj_v1.safetensors | Forcing garbage collection prior to loading new model
** model prmj_v1 could not be loaded: LatentDiffusion.__init__() missing 1 required positional argument: 'personalization_config'
Traceback (most recent call last):
  File "C:\dev\Invoke.venv\lib\site-packages\ldm\generate.py", line 889, in set_model
    model_data = cache.get_model(model_name)
  File "C:\dev\Invoke.venv\lib\site-packages\ldm\invoke\model_manager.py", line 106, in get_model
    requested_model, width, height, hash = self._load_model(model_name)
  File "C:\dev\Invoke.venv\lib\site-packages\ldm\invoke\model_manager.py", line 335, in _load_model
    model, width, height, model_hash = self._load_ckpt_model(
  File "C:\dev\Invoke.venv\lib\site-packages\ldm\invoke\model_manager.py", line 428, in _load_ckpt_model
    model = instantiate_from_config(omega_config.model)
  File "C:\dev\Invoke.venv\lib\site-packages\ldm\util.py", line 92, in instantiate_from_config
    return get_obj_from_str(config['target'])(
TypeError: LatentDiffusion.__init__() missing 1 required positional argument: 'personalization_config'
```
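For context on the failure mode: `instantiate_from_config` resolves the `target` class named in the YAML config and calls it with only the keys listed under `params`, so if the config supplied with the model does not provide a `personalization_config` entry that the class requires, instantiation raises exactly this `TypeError`. A minimal sketch (the class body and registry lookup are simplified stand-ins, not InvokeAI's actual code):

```python
class LatentDiffusion:
    # Stand-in: like InvokeAI's LatentDiffusion, this constructor requires
    # personalization_config as a positional argument.
    def __init__(self, personalization_config, **kwargs):
        self.personalization_config = personalization_config

# Simplified stand-in for get_obj_from_str's dotted-path class resolution.
REGISTRY = {"ldm.models.diffusion.ddpm.LatentDiffusion": LatentDiffusion}

def instantiate_from_config(config):
    # Only the keys under "params" are forwarded to the constructor, so a
    # config missing "personalization_config" produces the TypeError above.
    return REGISTRY[config["target"]](**config.get("params", {}))

# A config that omits personalization_config, as in the report:
config = {"target": "ldm.models.diffusion.ddpm.LatentDiffusion", "params": {}}

error_message = ""
try:
    instantiate_from_config(config)
except TypeError as e:
    error_message = str(e)
print(error_message)
```

This suggests the config file shipped with the model is a stock Stable Diffusion 2.x config that lacks the `personalization_config` block InvokeAI's `LatentDiffusion` expects.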
Screenshots
No response
Additional context
No response
Contact Details
No response
I tried a couple of other 2.1 models found on civitai.com; none of them work.
Does the stock/default 2.1 model work in your current setup?
Yes
There has been no activity in this issue for 14 days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release.
Due to inactivity, this issue was automatically closed. If you are still experiencing the issue, please recreate the issue.