
[Bug]: " AttributeError: 'FrozenOpenCLIPEmbedderWithCustomWords' object has no attribute 'tokenizer' " when using Aesthetic Gradients Embeddings

Open HighDruidMotas opened this issue 2 years ago • 4 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

When using the webui with 768-v-ema.ckpt, the following error occurs when attempting to generate with an Aesthetic Gradients embedding (https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients) selected (in this case sac_8plus.pt).

Log (removed absurdly long prompt for brevity):

Error completing request
Arguments: ('[[prompt]]', 'None', 'None', 123, 0, True, False, 1, 1, 16.18, -1.0, -1.0, 0, 0, 0, False, 768, 512, True, 0.42, 0, 0, 0, 0.01618, 1.618, '0.0001618', False, 'sac_8plus', '', 0.1, False, False, False, False, '', 1, '', 0, '', True, False, False) {}
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/modules/call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "/content/stable-diffusion-webui/modules/call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "/content/stable-diffusion-webui/modules/txt2img.py", line 49, in txt2img
    processed = process_images(p)
  File "/content/stable-diffusion-webui/modules/processing.py", line 430, in process_images
    res = process_images_inner(p)
  File "/content/stable-diffusion-webui/modules/processing.py", line 520, in process_images_inner
    uc = prompt_parser.get_learned_conditioning(shared.sd_model, negative_prompts, p.steps)
  File "/content/stable-diffusion-webui/modules/prompt_parser.py", line 138, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/stable-diffusion-webui/modules/sd_hijack_clip.py", line 219, in forward
    z1 = self.process_tokens(tokens, multipliers)
  File "/content/stable-diffusion-webui/extensions/aesthetic-gradients/aesthetic_clip.py", line 205, in __call__
    tokenizer = shared.sd_model.cond_stage_model.tokenizer
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1208, in __getattr__
    type(self).__name__, name))
AttributeError: 'FrozenOpenCLIPEmbedderWithCustomWords' object has no attribute 'tokenizer'
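Note on the failing line: the extension reads shared.sd_model.cond_stage_model.tokenizer, but SD 2.x checkpoints such as 768-v-ema.ckpt are wrapped by FrozenOpenCLIPEmbedderWithCustomWords, which at this point did not expose a tokenizer attribute the way the SD 1.x CLIP wrapper does, hence the AttributeError. The snippet below is a minimal, self-contained sketch of that failure mode and of a guarded lookup; the class bodies are stand-ins for illustration, not webui's actual implementations.

# Minimal sketch of the failure mode (not webui code): the SD 1.x-style wrapper
# exposes a tokenizer attribute, the SD 2.x OpenCLIP-style wrapper does not, so a
# bare attribute access raises AttributeError while a guarded lookup can branch.
class FrozenCLIPEmbedderWithCustomWords:
    # SD 1.x-style wrapper: carries a tokenizer (stand-in value here)
    tokenizer = "<hf-clip-tokenizer>"

class FrozenOpenCLIPEmbedderWithCustomWords:
    # SD 2.x-style wrapper: tokenizes via open_clip, no .tokenizer attribute
    pass

def get_tokenizer(cond_stage_model):
    # Equivalent of the failing line in aesthetic_clip.py, but guarded
    tokenizer = getattr(cond_stage_model, "tokenizer", None)
    if tokenizer is None:
        # An extension would need an OpenCLIP-specific tokenization path here
        raise NotImplementedError("OpenCLIP wrapper: needs a dedicated tokenization path")
    return tokenizer

print(get_tokenizer(FrozenCLIPEmbedderWithCustomWords()))    # works on SD 1.x-style models
try:
    get_tokenizer(FrozenOpenCLIPEmbedderWithCustomWords())   # the SD 2.x case from this issue
except NotImplementedError as e:
    print("SD 2.x model:", e)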

Steps to reproduce the problem

Load 768-v-ema.ckpt and run a generation with the sac_8plus aesthetic gradient embedding selected.

What should have happened?

Deselecting sac_8plus.pt in the CLIP Aesthetic settings and running the same prompt worked fine, which confirms the issue is with the Aesthetic Gradients setting.

Commit where the problem happens

4b3c5bc24bffdf429c463a465763b3077fe55eb8

What platforms do you use to access the UI?

Other/Cloud

What browsers do you use to access the UI?

No response

Command Line Arguments

No response

Additional information, context and logs

No response

HighDruidMotas avatar Nov 30 '22 21:11 HighDruidMotas

I am having the same issue regardless of which embedding I choose.

ManglerFTW avatar Nov 30 '22 23:11 ManglerFTW

same here

bosbrand avatar Dec 08 '22 15:12 bosbrand

Same here

Topzie avatar Dec 13 '22 12:12 Topzie

I have the same issue. I've put the AG files in the dedicated folder inside the plugins folder. It worked fine until I restarted the webui; now I get the same error the original post mentions. The AG files are still listed in the user interface as they should be, but selecting them produces the error. Even with no wildcard and no embedding, the same error appears. I tried uninstalling the Wildcards and Tokenizer addons, but the error persists even then.

Error completing request
Arguments: ('', '', 'Sci-Fi traveller station', 'None', 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 768, 768, False, 0.7, 0, 0, 0, False, True, False, 0, -1, 0.9, 5, '0.0001', False, 'djzCthuluAngelsV0_0', '', 0.1, False, 0, 0, 384, False, True, True, True, 1, False, False, False, False, '', 1, '', 0, '', True, False, False, 0, 4, 384, 384, False, False, True, True, True, False, True, 1) {}
Traceback (most recent call last):
  File "C:\sd1x5\stable-diffusion-webui\modules\call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "C:\sd1x5\stable-diffusion-webui\modules\call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "C:\sd1x5\stable-diffusion-webui\modules\txt2img.py", line 49, in txt2img
    processed = process_images(p)
  File "C:\sd1x5\stable-diffusion-webui\modules\processing.py", line 464, in process_images
    res = process_images_inner(p)
  File "C:\sd1x5\stable-diffusion-webui\modules\processing.py", line 556, in process_images_inner
    uc = prompt_parser.get_learned_conditioning(shared.sd_model, negative_prompts, p.steps)
  File "C:\sd1x5\stable-diffusion-webui\modules\prompt_parser.py", line 138, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "C:\sd1x5\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "C:\sd1x5\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\sd1x5\stable-diffusion-webui\modules\sd_hijack_clip.py", line 219, in forward
    z1 = self.process_tokens(tokens, multipliers)
  File "C:\sd1x5\stable-diffusion-webui\extensions\stable-diffusion-webui-aesthetic-gradients\aesthetic_clip.py", line 205, in __call__
    tokenizer = shared.sd_model.cond_stage_model.tokenizer
  File "C:\sd1x5\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'FrozenOpenCLIPEmbedderWithCustomWords' object has no attribute 'tokenizer'

ARandomUserFromGithub avatar Dec 13 '22 18:12 ARandomUserFromGithub
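Both tracebacks point at the same root cause: the loaded checkpoint is an SD 2.x-style model, so cond_stage_model is the OpenCLIP wrapper without a tokenizer attribute, and the extension's code path only handles the SD 1.x wrapper. One way to confirm which wrapper a given checkpoint produces is a quick check from code running inside the webui process, for example from an extension script. This is a hedged sketch; the import path is the modules.shared object named in the tracebacks, and running it outside webui will not work.

# Hedged diagnostic sketch: run inside the webui process (e.g. from an extension)
# to see which text-encoder wrapper the currently loaded checkpoint uses.
from modules import shared

wrapper = shared.sd_model.cond_stage_model
print(type(wrapper).__name__)         # 'FrozenOpenCLIPEmbedderWithCustomWords' for the models in this issue
print(hasattr(wrapper, "tokenizer"))  # False here, which is exactly what triggers the AttributeError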

same here. Are aesthetic gradients being treated like textual inversion embeddings? I liked all the aesthetic gradient settings.

rasamaya avatar Dec 29 '22 22:12 rasamaya

same here. Are aesthetic gradients being treated like textual inversion embeddings? I liked all the aesthetic gradient settings.

I managed to get mine to work, but I can't tell how it ended up working.

All I can tell is that after I installed A.G. on a fresh instance, it started working as expected on the older instance as well.

Maybe a git pull update fixed it, or maybe updating pip did. I really can't tell; it's nebulous to me.

ARandomUserFromGithub avatar Dec 29 '22 22:12 ARandomUserFromGithub

I managed to solve this by uninstalling and reinstalling Anaconda.

Gustavsenay avatar Jan 11 '23 20:01 Gustavsenay

Compatibility Warning

2023/01/12: webui's recent commit #50e25362794d46cd9a55c70e953a8b4126fd42f7 refactors the CLIP-related code and makes the wrapper even deeper and harder to hack into, which also kills the replace mode from this point on. I finally decided to remove the experimental 'replace' & 'grad' functionality :(

2023/01/04: webui's recent commit #bd68e35de3b7cf7547ed97d8bdf60147402133cc saves memory in the forward calculation but completely breaks backward gradient calculation via torch.autograd.grad(), which this script heavily relies on. This change is not pluggable but forcibly applied, so regrettably prompt-travel's grad mode and part of the replace mode will be broken from now on. (issue #7 cannot be fixed)

https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel

angrysky56 avatar Jan 12 '23 21:01 angrysky56
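The second dated note in that warning describes a related breakage: if the text-encoder forward pass is run in a way that no longer records an autograd graph (for example under torch.no_grad(), used here only as a generic illustration, not necessarily what the webui commit does), then torch.autograd.grad() can no longer differentiate through it, which is what aesthetic-gradient-style optimization relies on. A minimal, generic PyTorch sketch of that effect (not webui code):

# Minimal PyTorch demonstration (not webui code): a forward pass that records no
# graph yields an output torch.autograd.grad() cannot differentiate, which is the
# kind of breakage the compatibility warning above describes.
import torch

x = torch.randn(4, requires_grad=True)
w = torch.randn(4, requires_grad=True)

y = (w * x).sum()                    # normal forward: autograd graph is recorded
print(torch.autograd.grad(y, w))     # works, returns the gradient w.r.t. w

with torch.no_grad():                # memory-saving forward: no graph recorded
    y_ng = (w * x).sum()
try:
    torch.autograd.grad(y_ng, w)
except RuntimeError as e:
    print("grad through a no-graph forward fails:", e)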

Closing as stale.

catboxanon avatar Aug 03 '23 15:08 catboxanon