diffusionbee-stable-diffusion-ui
Copy Paste Prompt from History = Prompt Too Long!
I remember using the TensorFlow version of DiffusionBee on Mac: when I copy-pasted prompts from my history, I kept getting "Prompt Too Long".
My feeling is that the copy-paste also picks up some hidden characters or something. Try pasting the prompt into Notes and you can see the issue.
It might be a bug.
NOTE: The prompt text field is also too small.
Can you provide a prompt that's long enough to cause this error?
Many thanks for making this app - I've just hit this issue too by copy-pasting (prompt pasted below) - I'll try enzyme69's suggestion once the models have downloaded.
a beautiful empress portrait, with a brilliant, impossible striking big cosmic galaxy headpiece, clothes entirely made out of cosmos chaos energy, symmetrical, dramatic studio lighting, rococo, baroque, jewels, asian, hyperrealism, closeup, D&D, fantasy, intricate, elegant, highly detailed, digital painting, artstation, octane render, 8k, concept art, matte, sharp focus, illustration, art by Artgerm and Greg Rutkowski and Alphonse Muchaa
Update: pasting to/from a plain text file didn't work - reducing the prompt to three lines worked
a beautiful empress portrait, a beautiful empress portrait, hyperrealism, closeup, D&D, fantasy, intricate, elegant, highly detailed, digital painting, artstation, octane render, 8k, concept art, matte, sharp focus, illustration, art by Artgerm and Greg Rutkowski and Alphonse Muchaa
but produced very different results - is it possible to extend the prompt text field?
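For what it's worth, you can check in advance whether a prompt will trip the limit by counting its tokens. A minimal sketch, assuming the Hugging Face CLIPTokenizer (which uses the same CLIP BPE vocabulary as the tokenizer bundled with the app, so the counts should match):

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

full_prompt = "..."     # paste the full prompt here
reduced_prompt = "..."  # paste the shortened prompt here

for name, prompt in [("full", full_prompt), ("reduced", reduced_prompt)]:
    # encode() adds the start- and end-of-text tokens, so the count includes them.
    n_tokens = len(tokenizer.encode(prompt))
    print(f"{name}: {n_tokens} tokens (must be under 77)")
```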
This also happens with the negative prompt; some custom models require a tediously long negative prompt to work properly, so it would be a lifesaver if this could be fixed/improved!
How can this be solved? Having the same issues. Not enough space for both prompt and negative prompt.
For example, this prompt triggers the problem too: https://arthub.ai/art/23835 (I just tried one of the first examples from that hub).
The token length check is here: https://github.com/divamgupta/diffusionbee-stable-diffusion-ui/blob/c33afadd66dac6319b46b5f4446abf11de24c813/backends/stable_diffusion_tf/stable_diffusion_tf/stable_diffusion.py#L103-L108
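Paraphrasing the linked lines (so the exact code may differ slightly): the backend tokenizes the prompt, rejects anything that doesn't fit CLIP's fixed 77-token context window, and pads the remainder of the window with the end-of-text token. A self-contained approximation, using the Hugging Face tokenizer as a stand-in for the bundled one:

```python
import numpy as np
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

prompt = "..."  # placeholder for your prompt
inputs = tokenizer.encode(prompt)  # includes start- and end-of-text tokens

# This is the check behind the "Prompt Too Long" failure: no truncation or
# chunking, just a hard rejection once the prompt exceeds the window.
assert len(inputs) < 77, "Prompt is too long (should be < 77 tokens)"

# The rest of the 77-token window is padded with the end-of-text token (49407).
phrase = np.array(inputs + [49407] * (77 - len(inputs)), dtype="int32")[None]
```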
Some methods of getting around the 77-token limitation (a rough sketch of the chunking idea follows the list):
- https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#infinite-prompt-length
- https://old.reddit.com/r/StableDiffusion/comments/xr7wwf/sequential_token_weighting_invented_by/
- https://www.kaggle.com/code/blackroot/expanding-the-token-limit-in-stable-diffusion/log
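The first of those (AUTOMATIC1111's approach) essentially splits the token stream into 75-token chunks, encodes each chunk through CLIP in its own 77-token window, and concatenates the resulting embeddings before handing them to the UNet. A rough sketch of that idea, assuming the Hugging Face CLIP implementation rather than DiffusionBee's TF backend (names like `encode_long_prompt` are my own, not from any of the linked projects):

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode_long_prompt(prompt: str, chunk_size: int = 75) -> torch.Tensor:
    # Tokenize without truncation, then strip the start/end tokens the tokenizer adds.
    ids = tokenizer(prompt, truncation=False).input_ids[1:-1]
    bos, eos = tokenizer.bos_token_id, tokenizer.eos_token_id
    windows = []
    for i in range(0, len(ids), chunk_size):
        chunk = ids[i:i + chunk_size]
        # Re-wrap each chunk in its own 77-token window, padding with end-of-text.
        window = [bos] + chunk + [eos] * (chunk_size + 1 - len(chunk))
        windows.append(torch.tensor(window).unsqueeze(0))
    with torch.no_grad():
        # Encode each window independently and concatenate along the sequence axis,
        # so the UNet's cross-attention sees one long sequence of embeddings.
        embeddings = [text_encoder(w).last_hidden_state for w in windows]
    return torch.cat(embeddings, dim=1)

# Example: a prompt well over 77 tokens yields a (1, 77 * n_chunks, 768) tensor.
long_prompt = ", ".join(["highly detailed digital painting"] * 40)
print(encode_long_prompt(long_prompt).shape)
```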
When searching around, I also saw people saying this is a limitation of the CLIP text encoder, not of Stable Diffusion itself, and that a different CLIP model could be used.