
[Bug]: Theory: Hidden special characters are being added to prompts, causing different results for each GPU

Open 2u843yt385592yjh opened this issue 2 years ago • 14 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

It seems to be widely known that different GPUs produce different results. Comparing results from a 2060 with a 3060, I have landed on a theory that hidden special characters are being generated and injected into prompts. Why do special characters even affect prompts to begin with? Shouldn't they just be banned?

Steps to reproduce the problem (metadata attached)

Using this model: https://civitai.com/models/4823/deliberate, but it happens with any other. The seed and prompts below are all exactly the same.

2060 result

Deliberate (4)

3060 result

00181-20230213171947,2343937302

3060 result + some random special characters <'_.+$&!?.*^> added to the positive prompt

00183-20230213172411,2343937302

It doesn't look identical, but it becomes VERY similar.

Different combinations of special characters sometimes almost nail the 2060 result.

What is going on here?

Why do special characters even affect prompts to begin with? Shouldn't they just be banned?

Commit

https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/3715ece0adce7bf7c5e9c5ab3710b2fdc3848f39

2u843yt385592yjh avatar Feb 13 '23 16:02 2u843yt385592yjh
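Regarding the question of why special characters affect prompts at all: the prompt text is run through CLIP's tokenizer, so every character, punctuation included, produces tokens that shift the conditioning the model receives. Below is a minimal sketch, assuming the transformers package and the standard openai/clip-vit-large-patch14 tokenizer (this is not the webui's own code path), showing that stray characters change the token sequence:

```python
from transformers import CLIPTokenizer

# Illustrative only: the webui has its own tokenization/chunking code,
# but the underlying idea is the same.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

clean = "a portrait of a woman, detailed, 8k"
noisy = "a portrait of a woman, detailed, 8k <'_.+$&!?.*^>"

print(tokenizer.encode(clean))  # token ids for the plain prompt
print(tokenizer.encode(noisy))  # extra token ids appear for the "special" characters

# The two id sequences differ, so the text encoder produces a different
# conditioning vector; stray characters are not ignored, they change the image.
```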

Hmm.... you're onto something here. I'm quite concerned about this.

MikosDrafle avatar Feb 13 '23 16:02 MikosDrafle

The problem probably lies in textual inversion or the prompt formatting scripts. I don't know why, but that's what it looks like to me.

MikosDrafle avatar Feb 13 '23 16:02 MikosDrafle

Take a look at prompt_parser.py in the modules folder (not repositories)

MikosDrafle avatar Feb 13 '23 16:02 MikosDrafle
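For context on why punctuation is not inert even before tokenization: the webui treats characters such as parentheses, square brackets, and `:weight` suffixes as emphasis syntax. The following is a simplified, hypothetical re-implementation of that idea, not the actual code in modules/prompt_parser.py, just to illustrate that "special characters" get parsed as syntax:

```python
import re

# Simplified illustration of attention-style prompt parsing; not the webui's code.
ATTENTION_RE = re.compile(r"\((.*?):([\d.]+)\)|\((.*?)\)|\[(.*?)\]|([^()\[\]]+)")

def parse_attention(prompt):
    """Return (text, weight) chunks, showing punctuation is treated as syntax."""
    chunks = []
    for m in ATTENTION_RE.finditer(prompt):
        weighted_text, weight, paren_text, bracket_text, plain = m.groups()
        if weighted_text is not None:
            chunks.append((weighted_text, float(weight)))   # (word:1.3) explicit weight
        elif paren_text is not None:
            chunks.append((paren_text, 1.1))                # (word) boosts emphasis
        elif bracket_text is not None:
            chunks.append((bracket_text, 1 / 1.1))          # [word] reduces emphasis
        elif plain:
            chunks.append((plain, 1.0))                     # everything else is neutral
    return chunks

print(parse_attention("a photo of a (cat:1.3), [blurry] background"))
# [('a photo of a ', 1.0), ('cat', 1.3), (', ', 1.0), ('blurry', 0.909...), (' background', 1.0)]
```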

Is --xformers on? That can cause slightly randomized results.

aphix avatar Feb 13 '23 17:02 aphix

No, this has nothing to do with launch options, compatibility settings, commit versions, or whatever else you may come up with. I can tell you that from extensive testing of all possible setting combinations and reinstallations. The ONLY way to get a result closely matching the 2060 was by adding special characters.

2u843yt385592yjh avatar Feb 13 '23 17:02 2u843yt385592yjh

My gens randomly changed over the weekend and I couldn't figure out why. It was driving me crazy going through every possible process of elimination to reproduce them accurately, because no matter what kind of meltdown we've had in the past, I've always been able to reproduce seeds/prompts accurately once back up and running. Couldn't do it this time. Hidden special characters seems like a promising theory to investigate; never thought to look at the code because I don't code 🤷‍♂️ I'm on Colab btw, using the T4.

Edit: I've also noticed that the gen info printed out in the txt files with each run has been wrong, inconsistent, or just straight up missing stuff it used to include, like the batch size and location. The network info is wrong too, like which LoRA I'm using. It's odd.

rektobot avatar Feb 13 '23 17:02 rektobot

There's currently a bug where a wrong random seed roll is saved to the file. I made a report here with screenshots attached, confirmed on my system by rendering and then trying to render it again. It's not xformers related, and it's not the different-GPU issue either, as that may be a separate problem: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7808

Revenger avatar Feb 13 '23 22:02 Revenger

Four fingers, nice, you're ahead of the game m8. Seed issues are always fun. The first thing I did was take xformers out of the args and run some batches; I figured it was like the recent Karras rescheduling, which was a nice unexpected morning nutpunch. I was able to recover most of the gens when that happened. I still haven't been able to recover anything from this. Nothing looks remotely the same as I left it; it's not bad looking, it's just completely wrong. I deleted JSON configs, all the old stored hashes, dumped all extensions, went a few commits back, nothing worked. I learned that 27 batches of 3 gens is vastly different from 9 batches of 9 gens, like jumping clip skip from 2 to 6, it's another dimension basically. I don't mind seeds, but using them as a 'save point' is such a dogshit solution tbh.

rektobot avatar Feb 13 '23 23:02 rektobot

SOLUTION:

Noise generated using the randn function is not always the same on GPU; CPU should guarantee the same result for everyone, though I'm not certain.

I found that by digging through modules/processing.py.

That then leads me to devices.py, where the randn function is located. Changing it to CPU, I get a different result LOL

MikosDrafle avatar Feb 14 '23 07:02 MikosDrafle
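A minimal sketch of the point above, assuming PyTorch with a CUDA device available: with the same seed, torch.randn gives different values on CPU and on GPU because the two devices use separate RNG implementations, so where the initial noise is generated matters. Generating it on CPU and moving it to the GPU afterwards is the usual way to get the same starting latent everywhere:

```python
import torch

shape = (1, 4, 64, 64)  # latent shape for a 512x512 image

torch.manual_seed(12345)
cpu_noise = torch.randn(shape, device="cpu")

if torch.cuda.is_available():
    torch.manual_seed(12345)
    gpu_noise = torch.randn(shape, device="cuda")
    # Different generator implementations: the tensors are not expected to match.
    print(torch.allclose(cpu_noise, gpu_noise.cpu()))  # typically False

# Portable approach: create the noise on CPU, then move it to whatever device is used.
portable_noise = cpu_noise.to("cuda" if torch.cuda.is_available() else "cpu")
```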

@MikosDrafle Okay, but what is this "noise" you are talking about, and can it be forced into the prompt?

Say you send me your "noise" value; can I use it?

2u843yt385592yjh avatar Feb 14 '23 08:02 2u843yt385592yjh
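For what it's worth, the "noise" here is the starting latent tensor (roughly 1x4x64x64 for a 512x512 image), not a piece of text, so it cannot be pasted into the prompt. Sharing it would mean saving the tensor itself and loading it on the other machine, which, as far as I know, the webui does not support without code changes. A hypothetical sketch of what that sharing would look like in plain PyTorch:

```python
import torch

# Machine A: save the latent that would normally come from torch.randn.
latent = torch.randn((1, 4, 64, 64))       # starting noise for one 512x512 image
torch.save(latent, "shared_noise.pt")      # send this file to the other machine

# Machine B: load the exact same tensor, bypassing the local RNG entirely.
shared = torch.load("shared_noise.pt", map_location="cpu")
```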

It can also be more than the previously listed characters. I went back to old folders with generation info and noticed Unicode characters in the prompt that I couldn't even see in the browser (they show up in Notepad as a square made of dashes). Might have been escaping things for some reason.

78Alpha avatar Feb 14 '23 15:02 78Alpha
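A quick way to check saved generation info for characters like that is to scan for Unicode format/control code points. The helper below is illustrative, not part of the webui, and the sample string with zero-width characters is made up:

```python
import unicodedata

def find_hidden_characters(prompt: str):
    """Report characters that render as blank or invisible but still reach the tokenizer."""
    suspicious = []
    for i, ch in enumerate(prompt):
        category = unicodedata.category(ch)  # 'Cf' = format, 'Cc' = control
        if category in ("Cf", "Cc", "Co") or ch in ("\u00a0", "\u200b", "\u200e", "\u200f", "\ufeff"):
            suspicious.append((i, hex(ord(ch)), unicodedata.name(ch, "UNKNOWN"), category))
    return suspicious

print(find_hidden_characters("a cat\u200b sitting on a mat\ufeff"))
# e.g. [(5, '0x200b', 'ZERO WIDTH SPACE', 'Cf'), (23, '0xfeff', 'ZERO WIDTH NO-BREAK SPACE', 'Cf')]
```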

Have you made sure that it isn't due to using different versions of the model? Looking at the parameters in the images, I see that the model hash differs between the two. For what it's worth, my 4090 exactly reproduces your 2060 image when using the model with the same hash as you used for that image.

dresdenium avatar Feb 15 '23 19:02 dresdenium
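On that point, one quick way to rule out "same filename, different file" is to compare a full SHA-256 of the checkpoint on both machines, independent of however the UI derives its short hash. A small sketch; the checkpoint path is just a placeholder:

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Full SHA-256 of a file, read in chunks so large checkpoints fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder path; point this at the checkpoint each machine actually loads.
print(file_sha256("models/Stable-diffusion/deliberate.safetensors"))
```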

@MikosDrafle I think someone figured it out... https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/7809#discussioncomment-4999160 This might not be because of "different GPUs" but just because of a "different amount of CUDA cores". So it could be an easy fix... maybe.

2u843yt385592yjh avatar Feb 16 '23 22:02 2u843yt385592yjh
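If the "different amount of CUDA cores" explanation is right, the underlying mechanism would be that floating-point addition is not associative: the same reduction split across a different number of threads or blocks accumulates in a different order and can give slightly different values. A tiny CPU-only illustration of that effect (not webui code):

```python
import torch

x = torch.rand(1_000_000, dtype=torch.float32)

one_pass = x.sum()                                            # one grouping of the additions
chunked = torch.stack([c.sum() for c in x.chunk(128)]).sum()  # a different grouping

# Often a small non-zero difference: same numbers, different summation order.
print((one_pass - chunked).item())
```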