[Bug]: AttributeError: 'NoneType' object has no attribute 'process_texts'
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
When I start the UI, I get this error:
```
Launching Web UI with arguments: --xformers --deepdanbooru --no-half --ui-debug-mode --medvram
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 18.3s (import torch: 6.6s, import gradio: 3.9s, import ldm: 1.7s, other imports: 3.5s, scripts before_ui_callback: 1.8s, create ui: 0.6s, gradio launch: 0.2s).
Traceback (most recent call last):
  File "E:\AI_Anime_SD\webui\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\AI_Anime_SD\webui\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api
    result = await self.call_function(
  File "E:\AI_Anime_SD\webui\venv\lib\site-packages\gradio\blocks.py", line 884, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "E:\AI_Anime_SD\webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\AI_Anime_SD\webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "E:\AI_Anime_SD\webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "E:\AI_Anime_SD\webui\modules\call_queue.py", line 15, in f
    res = func(*args, **kwargs)
  File "E:\AI_Anime_SD\webui\modules\ui.py", line 286, in update_token_counter
    token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0])
  File "E:\AI_Anime_SD\webui\modules\ui.py", line 286, in <listcomp>
    token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0])
  File "E:\AI_Anime_SD\webui\modules\sd_hijack.py", line 219, in get_prompt_lengths
    _, token_count = self.clip.process_texts([text])
AttributeError: 'NoneType' object has no attribute 'process_texts'
```
### Steps to reproduce the problem
- I open webui-user.bat.
- The UI loads.
- I type "u" or any single letter into the prompt field.
- "ERROR" appears in the token counter, and the console logs the traceback posted above.
### What should have happened?
SD Web UI should have accepted the prompt and generated an image.
### Commit where the problem happens
`42687593`
### What platforms do you use to access the UI?
Windows
### What browsers do you use to access the UI?
Mozilla Firefox
### Command Line Arguments

```
--xformers --deepdanbooru --no-half --ui-debug-mode --medvram
```
### List of extensions
I deleted ControlNet altogether. I am not using any extensions other than the built-in ones.
### Console logs

```
[notice] A new release of pip available: 22.2.1 -> 23.0.1
[notice] To update, run: E:\AI_Anime_SD\webui\venv\Scripts\python.exe -m pip install --upgrade pip
Installing gfpgan
Installing clip
Installing open_clip
Installing xformers
Cloning Stable Diffusion into E:\AI_Anime_SD\webui\repositories\stable-diffusion-stability-ai...
Cloning Taming Transformers into E:\AI_Anime_SD\webui\repositories\taming-transformers...
Cloning K-diffusion into E:\AI_Anime_SD\webui\repositories\k-diffusion...
Cloning CodeFormer into E:\AI_Anime_SD\webui\repositories\CodeFormer...
Cloning BLIP into E:\AI_Anime_SD\webui\repositories\BLIP...
Installing requirements for CodeFormer
Installing requirements for Web UI
Launching Web UI with arguments: --xformers --deepdanbooru --no-half --ui-debug-mode --medvram
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 18.3s (import torch: 6.6s, import gradio: 3.9s, import ldm: 1.7s, other imports: 3.5s, scripts before_ui_callback: 1.8s, create ui: 0.6s, gradio launch: 0.2s).
Traceback (most recent call last):
  File "E:\AI_Anime_SD\webui\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\AI_Anime_SD\webui\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api
    result = await self.call_function(
  File "E:\AI_Anime_SD\webui\venv\lib\site-packages\gradio\blocks.py", line 884, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "E:\AI_Anime_SD\webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\AI_Anime_SD\webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "E:\AI_Anime_SD\webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "E:\AI_Anime_SD\webui\modules\call_queue.py", line 15, in f
    res = func(*args, **kwargs)
  File "E:\AI_Anime_SD\webui\modules\ui.py", line 286, in update_token_counter
    token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0])
  File "E:\AI_Anime_SD\webui\modules\ui.py", line 286, in <listcomp>
    token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0])
  File "E:\AI_Anime_SD\webui\modules\sd_hijack.py", line 219, in get_prompt_lengths
    _, token_count = self.clip.process_texts([text])
AttributeError: 'NoneType' object has no attribute 'process_texts'
```
### Additional information
I'm tired of updating every week only for SD web UI to crash immediately. I'm really doing my best to keep up. I use SD 1.5.
This seems to happen for me after using the LDSR upscaler, although the traceback is slightly different and ends with this:

```
  File "C:\stable-diffusion-webui\modules\sd_hijack.py", line 219, in get_prompt_lengths
    _, token_count = self.clip.process_texts([text])
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Identity' object has no attribute 'process_texts'
```

I think it has something to do with the cond_stage_config for LDSR being set to torch.nn.Identity in this config:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/22bcc7be428c94e9408f589966c2040187245d81/extensions-builtin/LDSR/scripts/ldsr_model.py#L18
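The traceback's `__getattr__` frame matches how `torch.nn.Module` resolves attributes: anything that was never registered on the module falls through to `__getattr__`, which raises `AttributeError`. A minimal stand-in (no torch required; `Module` and `Identity` here are simplified sketches, not the real classes) reproduces the failure when the CLIP hijack has been replaced by an `Identity` placeholder:

```python
class Module:
    """Simplified stand-in for torch.nn.Module's attribute lookup."""
    def __getattr__(self, name):
        # torch.nn.Module raises this exact message when an attribute
        # is not found among its registered parameters/buffers/modules
        raise AttributeError("'{}' object has no attribute '{}'".format(
            type(self).__name__, name))

class Identity(Module):
    """Stand-in for torch.nn.Identity: passes input through, defines nothing else."""
    def forward(self, x):
        return x

# After LDSR swaps the cond stage for an Identity, the token counter's
# call to process_texts hits the missing attribute:
clip = Identity()
try:
    clip.process_texts(["some prompt"])
except AttributeError as e:
    print(e)  # 'Identity' object has no attribute 'process_texts'
```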
@3dcinetv In your case, isn't this happening because you're launching with `--ui-debug-mode`? The CLIP tokenizer isn't loaded because the model itself isn't loaded in UI debug mode. From the argument's help text in the code: "Don't load model to quickly launch UI"
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/22bcc7be428c94e9408f589966c2040187245d81/modules/cmd_args.py#L91
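With no model loaded, `model_hijack.clip` stays `None`, so the token counter dereferences `None`. A hypothetical guard (the name `update_token_counter_safe` and the `(0, 75)` fallback are my assumptions, not webui code) sketches how the counter could tolerate the missing model:

```python
def update_token_counter_safe(model_hijack, prompts):
    """Hypothetical wrapper: fall back to a placeholder count when no model
    is loaded (e.g. under --ui-debug-mode, where model_hijack.clip is None)."""
    if getattr(model_hijack, "clip", None) is None:
        # Assumed placeholder: zero tokens used, default 75-token chunk limit
        return 0, 75
    # Same logic as update_token_counter in the traceback: take the longest prompt
    return max((model_hijack.get_prompt_lengths(p) for p in prompts),
               key=lambda args: args[0])

class _NoModel:  # hypothetical stand-in for a hijack with no model loaded
    clip = None

print(update_token_counter_safe(_NoModel(), ["masterpiece, 1girl"]))  # (0, 75)
```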
I have the exact same behavior as @catboxanon. Once I use the LDSR upscaler, this error appears every time I change the prompt. The token counter also shows ERROR instead of the prompt length:

```
Traceback (most recent call last):
  File "/media/daten2/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 394, in run_predict
    output = await app.get_blocks().process_api(
  File "/media/daten2/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1075, in process_api
    result = await self.call_function(
  File "/media/daten2/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 884, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/media/daten2/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/media/daten2/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/media/daten2/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/media/daten2/stable-diffusion-webui/modules/ui.py", line 265, in update_token_counter
    token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0])
  File "/media/daten2/stable-diffusion-webui/modules/ui.py", line 265, in <listcomp>
    token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0])
  File "/media/daten2/stable-diffusion-webui/modules/sd_hijack.py", line 219, in get_prompt_lengths
    _, token_count = self.clip.process_texts([text])
  File "/media/daten2/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Identity' object has no attribute 'process_texts'
```
I have the same issue after using LDSR.

```
  File "H:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 399, in run_predict
    output = await app.get_blocks().process_api(
  File "H:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1299, in process_api
    result = await self.call_function(
  File "H:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1022, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "H:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "H:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "H:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "H:\stable-diffusion-webui\modules\call_queue.py", line 15, in f
    res = func(*args, **kwargs)
  File "H:\stable-diffusion-webui\modules\ui.py", line 279, in update_token_counter
    token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0])
  File "H:\stable-diffusion-webui\modules\ui.py", line 279, in <listcomp>
    token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0])
  File "H:\stable-diffusion-webui\modules\sd_hijack.py", line 219, in get_prompt_lengths
    _, token_count = self.clip.process_texts([text])
  File "H:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Identity' object has no attribute 'process_texts'
```
same issue +1
same issue here
Same issue here; it showed up after upgrading. Checking out `72cd27a` fixes it.
I have the same issue. Everything looks fine until I start writing prompts.
Same issue. Works fine the first time the UI is started after installation, but once closed and relaunched this error comes up. Deleting and reinstalling shows the same behaviour: working the first time and failing on subsequent launches.
Similar problem when trying to use the LDSR upscaler:

```
URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)>
```

What could be the problem? How can I fix it? A clean installation didn't help; the problem is not on my end.
That's not very similar...
I was able to solve this problem. As for your error, it seems to be widespread and related to some bug on the webui side. Reloading the UI helps me and the error disappears. It does not always show up, and it is not clear what causes it.
I noticed that I can ignore the error and it works just as it should!
Same issue
Please upgrade to the current master (1.3.0), and open a new issue if this still persists.
Also, don't use --ui-debug-mode unless you're a developer and you know how to use it.