[bug]: merged inpainting models not loading
Is there an existing issue for this?
- [X] I have searched the existing issues
OS
Windows
GPU
cuda
VRAM
24
What happened?
Hello, I'm pulling my hair out trying to install custom inpainting models, such as AnythingV3 inpaint or, for example, a ProtoGen 3.4 checkpoint merged (add-sum) with the 1.5 inpainting model.
They work perfectly fine in AUTOMATIC1111, but InvokeAI gives me this error:
```
** model protogen-inpainting could not be loaded: Error(s) in loading state_dict for LatentInpaintDiffusion:
	size mismatch for model_ema.diffusion_modelinput_blocks00weight: copying a param with shape torch.Size([320, 4, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 9, 3, 3]).
Traceback (most recent call last):
  File "C:\Users\noname\invokeai\.venv\lib\site-packages\ldm\invoke\model_cache.py", line 81, in get_model
    requested_model, width, height, hash = self._load_model(model_name)
  File "C:\Users\noname\invokeai\.venv\lib\site-packages\ldm\invoke\model_cache.py", line 249, in _load_model
    model.load_state_dict(sd, strict=False)
  File "C:\Users\noname\invokeai\.venv\lib\site-packages\torch\nn\modules\module.py", line 1604, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LatentInpaintDiffusion:
	size mismatch for model_ema.diffusion_modelinput_blocks00weight: copying a param with shape torch.Size([320, 4, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 9, 3, 3])
```
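For anyone hitting this, the two shapes in the error are the giveaway: a standard v1 UNet's first convolution takes 4 latent channels, while an inpainting UNet takes 9 (4 latent + 4 masked-image latent + 1 mask). A quick way to see what actually ended up inside the merged file (a minimal sketch, assuming plain PyTorch; the filename matches the report):

```python
# A minimal sketch that checks whether a .ckpt actually contains an
# inpainting UNet; the filename is taken from the report above.
import torch

sd = torch.load("protogen-inpainting.ckpt", map_location="cpu")
sd = sd.get("state_dict", sd)  # some checkpoints nest the weights

# Standard v1 models have 4 input channels here; inpainting models have 9.
print(sd["model.diffusion_model.input_blocks.0.0.weight"].shape)

# EMA copies are stored under flattened names (dots removed). A 4-channel
# shape here is exactly what the size-mismatch error above is pointing at.
ema_key = "model_ema.diffusion_modelinput_blocks00weight"
if ema_key in sd:
    print(sd[ema_key].shape)
```

Note that the failing key is a model_ema.* one. If the merge kept 4-channel EMA weights, a config that never builds the EMA module will just skip them as unexpected keys under strict=False, while a config that does build it (as InvokeAI's inpainting config evidently does) hits exactly this size mismatch. That would explain why the same file loads in AUTOMATIC1111 but not here.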
The model is correctly named with the "-inpainting.ckpt" suffix.
The model is pointing to configs\stable-diffusion\v1-inpainting-inference.yaml.
A DreamBooth 1.5 retrained model merged with the 1.5 inpainting model works,
but ProtoGen or AnythingV3 merged the same way doesn't, so I'm really confused.
Link to the model, to reproduce: https://civitai.com/models/3128/anything-v3-inpainting
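For reference, the usual recipe for files like the one linked above is AUTOMATIC1111's "Add difference" merge at multiplier 1.0, i.e. inpaint + (custom - base); a plain weighted sum cannot produce the 9-channel first conv. A rough sketch of what that merge does under the hood (filenames are hypothetical; all three checkpoints are assumed to use the v1 key layout):

```python
# A rough sketch of the "Add difference" inpainting merge,
# result = inpaint + 1.0 * (custom - base); filenames are hypothetical.
import torch

def load_sd(path):
    sd = torch.load(path, map_location="cpu")
    return sd.get("state_dict", sd)

inpaint = load_sd("sd-v1-5-inpainting.ckpt")  # 9-channel inpainting base
custom  = load_sd("protogen34.ckpt")          # model being merged in
base    = load_sd("sd-v1-5.ckpt")             # base the custom model was trained from

merged = {}
for key, w in inpaint.items():
    if key in custom and key in base:
        delta = custom[key] - base[key]
        if w.shape != delta.shape:
            # The first conv: [320, 9, 3, 3] vs [320, 4, 3, 3]. Only the
            # 4 latent channels get the delta; the mask channels stay as-is.
            w = w.clone()
            w[:, : delta.shape[1], ...] += delta
        else:
            w = w + delta
    merged[key] = w  # keys missing from custom/base pass through unchanged

torch.save({"state_dict": merged}, "protogen-inpainting.ckpt")
```

The special case in the loop is the first conv, model.diffusion_model.input_blocks.0.0.weight, where only the first 4 of the 9 input channels receive the delta. A merge tool that skips that special-casing, or that carries EMA keys over from the 4-channel side, produces exactly the kind of file the traceback above rejects.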
Screenshots
No response
Additional context
No response
Contact Details
No response
I was getting the same error with a custom inpainting model. I made progress by importing it via the command line instead of the UI I had been using. However, now I'm getting this error:
```
model could not be loaded: LatentDiffusion.__init__() missing 1 required positional argument: 'personalization_config'
Traceback (most recent call last):
  File "/home/chris/invokeai/.venv/lib/python3.10/site-packages/ldm/invoke/model_cache.py", line 81, in get_model
    requested_model, width, height, hash = self._load_model(model_name)
  File "/home/chris/invokeai/.venv/lib/python3.10/site-packages/ldm/invoke/model_cache.py", line 248, in _load_model
    model = instantiate_from_config(omega_config.model)
  File "/home/chris/invokeai/.venv/lib/python3.10/site-packages/ldm/util.py", line 90, in instantiate_from_config
    return get_obj_from_str(config['target'])(
  File "/home/chris/invokeai/.venv/lib/python3.10/site-packages/ldm/models/diffusion/ddpm.py", line 2219, in __init__
    super().__init__(*args, **kwargs)
TypeError: LatentDiffusion.__init__() missing 1 required positional argument: 'personalization_config'
```
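That second traceback points at the config file rather than the weights: this fork's LatentDiffusion requires a personalization_config argument that the YAMLs bundled with InvokeAI include but many upstream or third-party configs omit. A quick sanity check of whatever YAML the model entry points at (a sketch, assuming OmegaConf, which ldm already depends on):

```python
# A quick sanity check (sketch) of the YAML a model entry points at.
from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/stable-diffusion/v1-inpainting-inference.yaml")

print(cfg.model.target)  # expect ...LatentInpaintDiffusion for an inpainting config
print(cfg.model.params.unet_config.params.in_channels)  # 9 for inpainting, 4 otherwise
print("personalization_config" in cfg.model.params)  # False reproduces the TypeError above
```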
I was able to work around this issue by using the !convert_model command to switch the model to diffusers and then importing it (the import happened automatically after converting).
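If the built-in command misbehaves, the same conversion can be done outside InvokeAI. A minimal sketch, assuming a diffusers release new enough to have from_single_file(); the filename is hypothetical:

```python
# A minimal sketch of the same ckpt -> diffusers conversion done outside
# InvokeAI; the input filename is hypothetical.
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_single_file("anything-v3-inpainting.ckpt")
pipe.save_pretrained("anything-v3-inpainting-diffusers")
```

The resulting diffusers folder should then be importable through the model manager as usual.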
There has been no activity in this issue for 14 days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release.
Actually, I figured this out. The inpainting model's configuration file was not set in the model manager. Pointing the config setting to the config file on my local filesystem let the model load.
Hi, what do you mean by pointing the config file setting to the config file in my local filesystem? I'm having issues with all my inpainting models not working, while normal models work just fine. Also, as soon as I launch invoke.bat I'm greeted by this message:

```
[2023-08-05 03:55:33,304]::[uvicorn.error]::INFO --> Started server process [5884]
[2023-08-05 03:55:33,304]::[uvicorn.error]::INFO --> Waiting for application startup.
[2023-08-05 03:55:33,304]::[InvokeAI]::INFO --> InvokeAI version 3.0.1post3
[2023-08-05 03:55:33,305]::[InvokeAI]::INFO --> Root directory = C:\Users\TCS\invokeai
[2023-08-05 03:55:33,310]::[InvokeAI]::INFO --> GPU device = cuda NVIDIA GeForce RTX 3060
[2023-08-05 03:55:33,327]::[InvokeAI]::INFO --> Scanning C:\Users\TCS\invokeai\models for new models
[2023-08-05 03:55:33,544]::[InvokeAI]::INFO --> Scanned 5 files and directories, imported 0 models
[2023-08-05 03:55:33,554]::[InvokeAI]::INFO --> Model manager service initialized
[2023-08-05 03:55:33,559]::[uvicorn.error]::INFO --> Application startup complete.
[2023-08-05 03:55:33,559]::[uvicorn.error]::INFO --> Uvicorn running on http: (Press CTRL+C to quit)
[2023-08-05 03:55:36,054]::[uvicorn.access]::INFO
```

There's already "uvicorn.error" showing, which I didn't see in the previous version. I deleted everything and did a clean install, but no luck. Sorry for the long reply, but there's no help out there in the InvokeAI ecosystem.
@egoegoegoegoego In the inpainting model's settings, under "Config", change it to point to C:/whereyouhaveinvokeaiinstalled/configs/stable-diffusion/v1-inpainting-inference.yaml.