[bug]: AttributeError: 'CLIPTokenizer' object has no attribute 'max_model_input_sizes' when using Textual Inversion
Is there an existing issue for this problem?
- [X] I have searched the existing issues
Operating system
Linux
GPU vendor
Nvidia (CUDA)
GPU model
No response
GPU VRAM
No response
Version number
461e857824c2cb5e2f6126dd1b056a1b2360c701
Browser
Firefox
Python dependencies
No response
What happened
When generating with a TI:
File "D:\git\InvokeAI-test\invokeai\backend\textual_inversion.py", line 96, in expand_textual_inversion_token_ids_if_necessary
max_length = list(self.tokenizer.max_model_input_sizes.values())[0] - 2
AttributeError: 'CLIPTokenizer' object has no attribute 'max_model_input_sizes'
transformers 4.40.0 introduced breaking changes (for us); see the "Static pretrained maps" section of the release notes: https://github.com/huggingface/transformers/releases/tag/v4.40.0
max_model_input_sizes was removed from CLIPTokenizer in this commit: https://github.com/huggingface/transformers/pull/29112/files#diff-85b29486a884f445b1014a26fecfb189141f2e6b09f4ae701ee758a754fddcc1L1564-L1565
What you expected to happen
TIs work
How to reproduce the problem
No response
Additional context
No response
Discord username
No response
@lstein @RyanJDick Any ideas?
I don't know if this is correct, but it did get me up and running. I made the following change, as it looked like the right kind of value to me.
invokeai\backend\textual_inversion.py, line 96:
max_length = list(self.tokenizer.max_model_input_sizes.values())[0] - 2
to
max_length = self.tokenizer.model_max_length - 2
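For anyone who needs to support both older and newer transformers releases, a defensive version of that line could look like the sketch below. This is only an illustration: `get_max_length` is a hypothetical helper, and the `SimpleNamespace` object stands in for a real CLIPTokenizer (whose `model_max_length` is 77 for CLIP models).

```python
from types import SimpleNamespace

def get_max_length(tokenizer) -> int:
    """Usable prompt length: the model's max input size minus the two special tokens."""
    if hasattr(tokenizer, "model_max_length"):
        # transformers >= 4.40 removed max_model_input_sizes from CLIPTokenizer;
        # model_max_length is still available and carries the same value.
        return tokenizer.model_max_length - 2
    # Fallback for older transformers that still expose the static map.
    return list(tokenizer.max_model_input_sizes.values())[0] - 2

# Stand-in for a real CLIPTokenizer, just to exercise the helper.
fake_tokenizer = SimpleNamespace(model_max_length=77)
print(get_max_length(fake_tokenizer))  # 75
```

Since `model_max_length` has been present on tokenizers for a long time, the new branch should be taken on both old and new transformers versions.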
Thanks for investigating, @skunkworxdark. That change looks good to me. I opened a PR here with that change and some improvements to the documentation: https://github.com/invoke-ai/InvokeAI/pull/6449