No longer works.
I've been using this script daily for months, but as of 31 Aug 2023 it now breaks Automatic1111 and prevents any image generation from happening.
With this script in the scripts folder, it is no longer possible to generate any images without getting an error about the number of arguments passed to the script's hijacked_get_learned_conditioning function.
Removing the script and restarting Automatic1111 returns everything to normal but of course the functionality that this script offered is no longer available.
I hope this can be resolved as I will severely miss the functionality that this script added.
Here's how to fix it: edit the script to add , *args to all the *_get_learned_conditioning functions, e.g. def hijacked_get_learned_conditioning(model, prompts, steps, *args): on line 7 and return real_get_learned_conditioning(model, switched_prompts, steps, *args) on line 40. There may be unforeseen issues, but it appears to work: {fire|ice} looks the same as {fire@3|ice@3}, and {fire@3|ice@20} looks different, so the syntax seems to be parsed correctly for basic txt2img.
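To illustrate the shape of this patch, here is a minimal standalone sketch. The real_get_learned_conditioning stand-in and the placeholder prompt rewrite are illustrative, not the script's actual code; the point is that forwarding *args keeps the hook compatible when newer Automatic1111 builds pass extra positional arguments.

```python
def real_get_learned_conditioning(model, prompts, steps, *args):
    # Stand-in for the saved original webui function; just echoes its inputs.
    return {"model": model, "prompts": prompts, "steps": steps, "extra": args}

def hijacked_get_learned_conditioning(model, prompts, steps, *args):
    # Without *args in the signature above, any call passing a 4th
    # positional argument raises the TypeError about argument count.
    switched_prompts = [p.upper() for p in prompts]  # placeholder for switch_syntax
    # Forward the extra positional arguments untouched.
    return real_get_learned_conditioning(model, switched_prompts, steps, *args)

result = hijacked_get_learned_conditioning("model", ["fire"], 20, "hires_steps")
```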
I tried this, but I still get errors:
line 24, in get_learned_conditioning
"original_size_as_tuple": torch.tensor([height, width], **devices_args).repeat(len(batch), 1),
TypeError: must be real number, not NoneType
I used ChatGPT to correct this function. Funny, I gave it the code and the stack trace with the error. It said it needed more data and gave me new code with print() calls added. I ran that, got some data printed in the console, gave it to ChatGPT, and it suggested the working solution. Enjoy:
def hijacked_get_learned_conditioning(model, prompts, steps, *args):
    global real_get_learned_conditioning
    if not hasattr(model, '__hacked'):
        # Wrap the model's own conditioning function once, so that batches
        # containing weighted sub-prompts are expanded and blended.
        real_model_func = model.get_learned_conditioning
        def hijacked_model_func(texts):
            weighted_prompts = list(map(lambda t: get_weighted_prompt((t, 1)), texts))
            # Flatten every (prompt, weight) pair into one list of texts.
            all_texts = []
            for weighted_prompt in weighted_prompts:
                for (prompt, weight) in weighted_prompt:
                    all_texts.append(prompt)
            if len(all_texts) > len(texts):
                # At least one entry expanded into multiple sub-prompts:
                # encode everything in a single call, then recombine.
                all_conds = real_model_func(all_texts)
                offset = 0
                # SDXL conditioning is a dict of tensors, not a plain tensor,
                # so each key must be blended separately.
                conds = {'crossattn': [], 'vector': []}
                for weighted_prompt in weighted_prompts:
                    c_crossattn = torch.zeros_like(all_conds['crossattn'][offset])
                    c_vector = torch.zeros_like(all_conds['vector'][offset])
                    for (i, (prompt, weight)) in enumerate(weighted_prompt):
                        # Weighted sum: c += weight * cond
                        c_crossattn = torch.add(c_crossattn, all_conds['crossattn'][i + offset], alpha=weight)
                        c_vector = torch.add(c_vector, all_conds['vector'][i + offset], alpha=weight)
                    conds['crossattn'].append(c_crossattn)
                    conds['vector'].append(c_vector)
                    offset += len(weighted_prompt)
                return conds
            else:
                # Nothing expanded; pass the batch through unchanged.
                return real_model_func(texts)
        model.get_learned_conditioning = hijacked_model_func
        model.__hacked = True
    switched_prompts = list(map(lambda p: switch_syntax(p), prompts))
    return real_get_learned_conditioning(model, switched_prompts, steps, *args)
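The recombination step above can be sketched in pure Python to see what it does: each batch entry expands into several sub-prompts, all of them are encoded in one call, and the results are folded back into one conditioning per entry as a weighted sum (which is what torch.add(c, x, alpha=weight) computes per tensor). Plain floats stand in for the 'crossattn'/'vector' tensors here; the function name blend is illustrative.

```python
def blend(all_conds, weighted_prompts):
    """Recombine flat per-sub-prompt encodings into one value per batch entry."""
    conds, offset = [], 0
    for weighted_prompt in weighted_prompts:
        total = 0.0
        for i, (_prompt, weight) in enumerate(weighted_prompt):
            # Equivalent of torch.add(total, all_conds[i + offset], alpha=weight)
            total += weight * all_conds[i + offset]
        conds.append(total)
        offset += len(weighted_prompt)  # skip past this entry's sub-prompts
    return conds

# Batch of two entries: the first blends two sub-prompts 50/50,
# the second is a single prompt with weight 1.
weighted = [[("fire", 0.5), ("ice", 0.5)], [("sky", 1.0)]]
encoded = [2.0, 4.0, 10.0]  # pretend per-sub-prompt conditionings
print(blend(encoded, weighted))  # [3.0, 10.0]
```

The offset bookkeeping is the part ChatGPT's fix had to get right for SDXL: since all sub-prompts are encoded in one flat batch, each entry must consume exactly len(weighted_prompt) slots before the next entry begins.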