text2image-gui
key error: state_dict
Got "key error: state_dict" with about a half of the models that I've downloaded.
This means the model was merged with an incompatible script.
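For context: a `.ckpt` file is a pickled Python dict. Some checkpoints wrap the weights under a `'state_dict'` key, while many merge scripts save the flat state dict directly, which is what triggers `KeyError: 'state_dict'` in a loader that assumes the wrapped layout. A minimal sketch of a tolerant lookup (plain dicts stand in for the result of `torch.load(ckpt, map_location="cpu")`):

```python
# Two common checkpoint layouts (illustrated with plain dicts standing in
# for real torch.load() results):
wrapped = {"state_dict": {"model.diffusion_model.w": 1.0}, "global_step": 1000}
flat = {"model.diffusion_model.w": 1.0}  # merge scripts often save this form

def get_weights(pl_sd):
    # Tolerant lookup: use the nested dict when present, otherwise
    # treat the whole checkpoint as the state dict.
    return pl_sd.get("state_dict", pl_sd)

print(get_weights(wrapped) == get_weights(flat))  # → True
```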
[01-07-2023 21:49:30] [sd] >> Loading protogenV22AnimeOffi_22.ckpt-noVae from T:/IA/ModelsAI/CKPT/version1/protogenV22AnimeOffi_22.ckpt
[01-07-2023 21:49:30] [sd] ** model protogenV22AnimeOffi_22.ckpt-noVae could not be loaded: 'state_dict'
[01-07-2023 21:49:30] [sd] Traceback (most recent call last):
[01-07-2023 21:49:30] [sd] File "t:\ia\sd-gui\data\repo\ldm\invoke\model_cache.py", line 68, in get_model
[01-07-2023 21:49:30] [sd] requested_model, width, height, hash = self._load_model(model_name)
[01-07-2023 21:49:30] [sd] File "t:\ia\sd-gui\data\repo\ldm\invoke\model_cache.py", line 209, in _load_model
[01-07-2023 21:49:30] [sd] sd = pl_sd['state_dict']
[01-07-2023 21:49:30] [sd] KeyError: 'state_dict'
[01-07-2023 21:49:30] [sd] ** restoring None
[01-07-2023 21:49:30] [sd] ** "None" is not a known model name. Please check your models.yaml file
[01-07-2023 21:49:30] [general] ExportLoop END
[01-07-2023 21:49:30] [general] No images generated.
[01-07-2023 21:49:30] [general] SetState(Standby)
Is this the same error? Does it have a solution? Thanks!
Download safetensors file if available and convert it to ckpt
I tried using the same model, and a newer version converted with Safe-and-Stable-Ckpt2Safetensors-Conversion-Tool-GUI, but it gave me the same error.
[01-11-2023 02:03:19] [sd] >> Loading protogenX34Photoreal_1.ckpt-noVae from T:/IA/ModelsAI/CKPT/version1/protogenX34Photoreal_1.ckpt
[01-11-2023 02:03:19] [sd] ** model protogenX34Photoreal_1.ckpt-noVae could not be loaded: 'state_dict'
[01-11-2023 02:03:19] [general] Canceling. Reason: Process has errored: Failed to load model. The model appears to be incompatible. - Implementation: InvokeAi - Force Kill: False
[01-11-2023 02:03:19] [general] Killing current task's processes.
[01-11-2023 02:03:19] [general] Canceled: Process has errored: Failed to load model. The model appears to be incompatible.
[01-11-2023 02:03:19] [sd] Traceback (most recent call last):
[01-11-2023 02:03:19] [sd] File "t:\ia\sd-gui\data\repo\ldm\invoke\model_cache.py", line 68, in get_model
[01-11-2023 02:03:19] [sd] requested_model, width, height, hash = self._load_model(model_name)
[01-11-2023 02:03:19] [sd] File "t:\ia\sd-gui\data\repo\ldm\invoke\model_cache.py", line 209, in _load_model
[01-11-2023 02:03:19] [sd] sd = pl_sd['state_dict']
[01-11-2023 02:03:19] [sd] KeyError: 'state_dict'
[01-11-2023 02:03:19] [sd] ** restoring None
[01-11-2023 02:03:19] [sd] ** "None" is not a known model name. Please check your models.yaml file
[01-11-2023 02:03:19] [sd] >> Loading protogenX34Photoreal_1.ckpt-noVae from T:/IA/ModelsAI/CKPT/version1/protogenX34Photoreal_1.ckpt
[01-11-2023 02:03:19] [sd] ** model protogenX34Photoreal_1.ckpt-noVae could not be loaded: 'state_dict'
[01-11-2023 02:03:19] [sd] Traceback (most recent call last):
[01-11-2023 02:03:19] [sd] File "t:\ia\sd-gui\data\repo\ldm\invoke\model_cache.py", line 68, in get_model
[01-11-2023 02:03:19] [sd] requested_model, width, height, hash = self._load_model(model_name)
[01-11-2023 02:03:19] [sd] File "t:\ia\sd-gui\data\repo\ldm\invoke\model_cache.py", line 209, in _load_model
[01-11-2023 02:03:19] [sd] sd = pl_sd['state_dict']
[01-11-2023 02:03:19] [sd] KeyError: 'state_dict'
[01-11-2023 02:03:19] [sd] ** restoring None
[01-11-2023 02:03:19] [sd] ** "None" is not a known model name. Please check your models.yaml file
[01-11-2023 02:03:19] [general] ExportLoop END
[01-11-2023 02:03:19] [general] No images generated.
[01-11-2023 02:03:19] [general] SetState(Standby)
Use NMKD SD GUI converter to convert safetensors to ckpt, as I have said before.
I hadn't seen that option within the GUI. I converted it and it worked without problems. Thank you.
"Converting model 'elldrethSLucidMix_v10.safetensors' - This could take a few minutes... Failed to convert model."
"Converting model 'elldrethSLucidMix_v10.safetensors' - This could take a few minutes... Failed to convert model."
Send logs
n00mkrad, change the model-loading methods in StableDiffusionGui\Data\repo\ldm\invoke\model_cache.py and StableDiffusionGui\Data\repo\optimizedSD\optimized_txt2img_loop.py as in https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/10aca1ca3e81e69e08f556a500c3dc603451429b. I have tested this; it solved the loading issue with many third-party models.
model_cache.py:

class ModelCache(object):
    ...
    def transform_checkpoint_dict_key(self, k):
        chckpoint_dict_replacements = {
            'cond_stage_model.transformer.embeddings.': 'cond_stage_model.transformer.text_model.embeddings.',
            'cond_stage_model.transformer.encoder.': 'cond_stage_model.transformer.text_model.encoder.',
            'cond_stage_model.transformer.final_layer_norm.': 'cond_stage_model.transformer.text_model.final_layer_norm.',
        }
        for text, replacement in chckpoint_dict_replacements.items():
            if k.startswith(text):
                k = replacement + k[len(text):]
        return k

    def get_state_dict_from_checkpoint(self, pl_sd):
        pl_sd = pl_sd.pop("state_dict", pl_sd)
        pl_sd.pop("state_dict", None)
        sd = {}
        for k, v in pl_sd.items():
            new_key = self.transform_checkpoint_dict_key(k)
            if new_key is not None:
                sd[new_key] = v
        pl_sd.clear()
        pl_sd.update(sd)
        return pl_sd

    ...
    def _load_model(self, model_name:str):
        ...
        del weight_bytes
        # sd = pl_sd['state_dict']
        sd = self.get_state_dict_from_checkpoint(pl_sd)
        model = instantiate_from_config(c.model)
        ...
optimized_txt2img_loop.py:

...
chckpoint_dict_replacements = {
    'cond_stage_model.transformer.embeddings.': 'cond_stage_model.transformer.text_model.embeddings.',
    'cond_stage_model.transformer.encoder.': 'cond_stage_model.transformer.text_model.encoder.',
    'cond_stage_model.transformer.final_layer_norm.': 'cond_stage_model.transformer.text_model.final_layer_norm.',
}

def transform_checkpoint_dict_key(k):
    for text, replacement in chckpoint_dict_replacements.items():
        if k.startswith(text):
            k = replacement + k[len(text):]
    return k

def get_state_dict_from_checkpoint(pl_sd):
    pl_sd = pl_sd.pop("state_dict", pl_sd)
    pl_sd.pop("state_dict", None)
    sd = {}
    for k, v in pl_sd.items():
        new_key = transform_checkpoint_dict_key(k)
        if new_key is not None:
            sd[new_key] = v
    pl_sd.clear()
    pl_sd.update(sd)
    return pl_sd

def load_model_from_config(ckpt, verbose=False):
    print(f"Loading model from {ckpt}")
    pl_sd = torch.load(ckpt, map_location="cpu")
    if "global_step" in pl_sd:
        print(f"Global Step: {pl_sd['global_step']}")
    # sd = pl_sd["state_dict"]
    sd = get_state_dict_from_checkpoint(pl_sd)
    return sd
...
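For reference, the patched helpers can be exercised on their own, without torch or a real checkpoint. The sketch below (plain dicts stand in for a `torch.load` result; the key names are from the replacement table above) shows a flat checkpoint being accepted and an old-style CLIP text-encoder key being rewritten:

```python
# Same replacement table and helpers as in the patch (the misspelled
# name 'chckpoint_dict_replacements' is kept as in the upstream commit).
chckpoint_dict_replacements = {
    'cond_stage_model.transformer.embeddings.': 'cond_stage_model.transformer.text_model.embeddings.',
    'cond_stage_model.transformer.encoder.': 'cond_stage_model.transformer.text_model.encoder.',
    'cond_stage_model.transformer.final_layer_norm.': 'cond_stage_model.transformer.text_model.final_layer_norm.',
}

def transform_checkpoint_dict_key(k):
    for text, replacement in chckpoint_dict_replacements.items():
        if k.startswith(text):
            k = replacement + k[len(text):]
    return k

def get_state_dict_from_checkpoint(pl_sd):
    pl_sd = pl_sd.pop("state_dict", pl_sd)  # unwrap if nested
    pl_sd.pop("state_dict", None)           # drop any stray inner key
    sd = {}
    for k, v in pl_sd.items():
        new_key = transform_checkpoint_dict_key(k)
        if new_key is not None:
            sd[new_key] = v
    pl_sd.clear()
    pl_sd.update(sd)
    return pl_sd

# A flat checkpoint (no 'state_dict' wrapper) with an old-style CLIP key:
ckpt = {
    'cond_stage_model.transformer.encoder.layers.0.weight': 0.5,
    'model.diffusion_model.out.weight': 1.0,
}
sd = get_state_dict_from_checkpoint(ckpt)
print('cond_stage_model.transformer.text_model.encoder.layers.0.weight' in sd)  # → True
```

The old loader would have raised `KeyError: 'state_dict'` on this input; the patched version accepts it and renames the CLIP keys to the layout newer code expects.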
@iillii thanks a lot, that seems to work. Will include the changes in the next update.