stable-diffusion-webui-tokenizer
AttributeError: 'FrozenOpenCLIPEmbedder' object has no attribute 'tokenizer'
Can't run it at all, unfortunately. :(
```
Traceback (most recent call last):
  File "/scratch/StableDiffusion/AUTOMATIC1111/stable-diffusion-webui/venv/lib64/python3.10/site-packages/gradio/routes.py", line 284, in run_predict
    output = await app.blocks.process_api(
  File "/scratch/StableDiffusion/AUTOMATIC1111/stable-diffusion-webui/venv/lib64/python3.10/site-packages/gradio/blocks.py", line 982, in process_api
    result = await self.call_function(fn_index, inputs, iterator)
  File "/scratch/StableDiffusion/AUTOMATIC1111/stable-diffusion-webui/venv/lib64/python3.10/site-packages/gradio/blocks.py", line 824, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/scratch/StableDiffusion/AUTOMATIC1111/stable-diffusion-webui/venv/lib64/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/scratch/StableDiffusion/AUTOMATIC1111/stable-diffusion-webui/venv/lib64/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/scratch/StableDiffusion/AUTOMATIC1111/stable-diffusion-webui/venv/lib64/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/scratch/StableDiffusion/AUTOMATIC1111/stable-diffusion-webui/extensions/stable-diffusion-webui-tokenizer/scripts/tokenizer.py", line 30, in tokenize
    tokens = clip.tokenizer(text, truncation=False, add_special_tokens=False)["input_ids"]
  File "/scratch/StableDiffusion/AUTOMATIC1111/stable-diffusion-webui/venv/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'FrozenOpenCLIPEmbedder' object has no attribute 'tokenizer'
```
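For context, the traceback points at the likely cause: SD 2.x checkpoints load `FrozenOpenCLIPEmbedder` (from the open_clip package), which does not expose a HuggingFace-style `.tokenizer` attribute the way SD 1.x's `FrozenCLIPEmbedder` does, so the extension's `clip.tokenizer(...)` call in `scripts/tokenizer.py` fails. A rough, untested sketch of how that `tokenize` function could dispatch on the embedder type (the function name comes from the traceback; `open_clip.tokenizer._tokenizer` is open_clip's module-level `SimpleTokenizer`, a private name that could move in a future release):

```python
import open_clip

def tokenize(clip, text):
    # SD 1.x: FrozenCLIPEmbedder wraps a HuggingFace CLIPTokenizer,
    # exposed as clip.tokenizer (the path the extension already uses)
    if hasattr(clip, "tokenizer"):
        return clip.tokenizer(text, truncation=False,
                              add_special_tokens=False)["input_ids"]
    # SD 2.x: FrozenOpenCLIPEmbedder tokenizes via open_clip, which has
    # no .tokenizer attribute; fall back to open_clip's SimpleTokenizer
    return open_clip.tokenizer._tokenizer.encode(text)
```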
You wouldn't happen to have found a solution to this, would you?
Nope :(
the extension "Embedding Inspector" has a "mini tokenizer" that works, but I'm hoping to be able to use this one as well
the extension "Embedding Inspector" has a "mini tokenizer" that works, but I'm hoping to be able to use this one as well
Hey thanks! At least it's something for now :)
I'm also wondering: do you know if this can be used as an accurate estimate of the number of tokens being used?
To be clear, I mean the counter just to the left of the Generate button.
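For what it's worth, one way to sanity-check that counter is to run the prompt through the same CLIP tokenizer the model uses and compare counts. A minimal sketch for an SD 1.x model, using the HuggingFace tokenizer for CLIP ViT-L/14 (the vocabulary SD 1.x's text encoder uses); the webui may handle special tokens and 75-token chunking slightly differently, so treat this as an approximation:

```python
from transformers import CLIPTokenizer

# SD 1.x uses the CLIP ViT-L/14 tokenizer
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

prompt = "a photo of an astronaut riding a horse"
ids = tokenizer(prompt, truncation=False, add_special_tokens=False)["input_ids"]
print(len(ids))  # compare with the counter next to the Generate button
```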