
[Bug]: Weighted prompting seems to be broken

Open kalkal11 opened this issue 3 years ago • 2 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

When trying to use weighted prompts, things don't behave as expected, and I noticed errors in the terminal output.

Steps to reproduce the problem

  1. Use a weighted prompt like `(bob ross:0.8) on his (mighty steed:1.2)`

What should have happened?

No error; the weighting should be applied when the prompt is parsed.
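For context, the `(text:weight)` syntax is expected to attach an attention multiplier to a span of the prompt. The following is a minimal, simplified sketch of that parsing behavior (it is not the webui's actual `prompt_parser` implementation, just an illustration of the expected input/output), so it should be read as an assumption about the intended behavior rather than the real code:

```python
import re

# Hypothetical simplified parser: extract "(text:weight)" spans from a
# prompt and return (text, weight) pairs, defaulting to 1.0 for
# unweighted text. The real webui parser handles nesting and more syntax.
WEIGHTED = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_weights(prompt):
    result = []
    pos = 0
    for m in WEIGHTED.finditer(prompt):
        plain = prompt[pos:m.start()].strip()
        if plain:
            result.append((plain, 1.0))  # text outside parentheses keeps weight 1.0
        result.append((m.group(1), float(m.group(2))))  # weighted span
        pos = m.end()
    tail = prompt[pos:].strip()
    if tail:
        result.append((tail, 1.0))
    return result

print(parse_weights("(bob ross:0.8) on his (mighty steed:1.2)"))
# [('bob ross', 0.8), ('on his', 1.0), ('mighty steed', 1.2)]
```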

Commit where the problem happens

685f963

What platforms do you use to access UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

--xformers

Additional information, context and logs

```
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 284, in run_predict
    output = await app.blocks.process_api(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 982, in process_api
    result = await self.call_function(fn_index, inputs, iterator)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 824, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 15, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\ui.py", line 377, in update_token_counter
    tokens, token_count, max_length = max([model_hijack.tokenize(prompt) for prompt in prompts], key=lambda args: args[1])
  File "D:\stable-diffusion-webui\modules\ui.py", line 377, in <listcomp>
    tokens, token_count, max_length = max([model_hijack.tokenize(prompt) for prompt in prompts], key=lambda args: args[1])
  File "D:\stable-diffusion-webui\modules\sd_hijack.py", line 131, in tokenize
    _, remade_batch_tokens, _, _, _, token_count = self.clip.process_text([text])
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Identity' object has no attribute 'process_text'
```

kalkal11 — Dec 11 '22 00:12

Seems to be related to #5604.

ataa — Dec 11 '22 09:12