stable-diffusion-webui
[Feature Request]: include metadata that drastically changes output
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What would your feature do?
- Include additional metadata in output images.
- Suggested metadata includes: a) torch version, b) Python version (not sure if it affects the seed or something), c) maybe the CUDA or NVIDIA driver version if it can affect the final result, d) other drastic options which developers might know better than anyone 😜
- (Optional) would be nice to highlight "drastic discrepancies" in red (#ff0000) on the PNG Info tab, like a mismatched torch version, Python version, or something else like a missing lora:xxxx
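For context, the webui already stores its infotext in a PNG `tEXt` chunk (written via Pillow under the `parameters` key). A stdlib-only sketch of how extra keys such as a torch version could ride along in the same way — the `minimal_png` helper and the extra key names are my own for illustration, not webui API; the chunk layout follows the PNG specification:

```python
import struct
import sys
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk(ctype: bytes, payload: bytes) -> bytes:
    # A PNG chunk: 4-byte big-endian length, type, payload, CRC over type+payload.
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

def text_chunk(key: str, value: str) -> bytes:
    # tEXt payload: latin-1 keyword, NUL separator, latin-1 text.
    return chunk(b"tEXt", key.encode("latin-1") + b"\x00" + value.encode("latin-1"))

def minimal_png(metadata: dict) -> bytes:
    # A 1x1 8-bit RGB placeholder image; a real generation would attach the
    # same chunks to the rendered image instead.
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)
    raw = b"\x00\xff\x00\x00"  # one filter byte + one red pixel
    body = b"".join(text_chunk(k, v) for k, v in metadata.items())
    return (PNG_SIG + chunk(b"IHDR", ihdr) + body
            + chunk(b"IDAT", zlib.compress(raw)) + chunk(b"IEND", b""))

png_bytes = minimal_png({
    "parameters": "a cat, Steps: 20, Seed: 1",      # illustrative infotext
    "python_version": "%d.%d.%d" % sys.version_info[:3],
})
```

Any PNG viewer or the PNG Info tab would then show the extra keys alongside the existing `parameters` text.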
Proposed workflow
- PNGInfo -> txt2img
- Click Generate button
- The output should contain the data necessary to regenerate the image with "visually close enough" accuracy. (I don't know how to determine this, I just don't want the final images to differ completely.)
- Repeating steps 1-3 should give consistent results when rerunning my own images next month, or the month after, etc.
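The checking side of that workflow could look like the sketch below: read the `tEXt` chunks back out of the PNG and diff them against the current environment. `read_text_chunks` and `find_discrepancies` are hypothetical helpers, not webui API (the webui reads PNG metadata via Pillow):

```python
import platform
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def read_text_chunks(data: bytes) -> dict:
    """Collect tEXt key/value pairs from a PNG byte string."""
    assert data.startswith(PNG_SIG), "not a PNG"
    pos, out = len(PNG_SIG), {}
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        if ctype == b"tEXt":
            key, _, value = data[pos + 8:pos + 8 + length].partition(b"\x00")
            out[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # length field + type + payload + CRC
    return out

def find_discrepancies(recorded: dict, current: dict) -> list:
    """Keys recorded in the image whose value differs on this machine --
    candidates for the red highlight proposed above."""
    return [k for k in recorded if k in current and recorded[k] != current[k]]

# Example of the "current environment" side of the comparison:
current_env = {"python_version": platform.python_version()}
```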
Additional information
As of today I can't reproduce images that I generated 1 month ago, even with a recreated setup.
It took me quite a while to figure out that the "torch" library upgrade from 1.13 to 2.0 drastically changes outputs. I've got both versions locally at the moment to test it further.
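Until such metadata exists, one workaround is to stamp an environment snapshot alongside each generation yourself, so version differences like the torch 1.13 → 2.0 jump are at least diagnosable later. A minimal sketch (the field names are my own suggestion, not an existing webui format):

```python
import json
import platform

def environment_snapshot() -> dict:
    """Versions worth recording alongside each generation."""
    snap = {
        "python": platform.python_version(),
        "os": platform.platform(),
    }
    try:
        import torch  # only present in an actual webui environment
        snap["torch"] = torch.__version__
        snap["cuda"] = torch.version.cuda or "cpu"
    except ImportError:
        pass
    return snap

print(json.dumps(environment_snapshot(), indent=2))
```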
As a user it is very confusing that the PNG Info -> txt2img flow doesn't work for images I generated earlier.
I found that embeddings and textual inversions can be tricky: the output silently changes depending on whether an embedding mentioned in the prompt is actually installed.
Scenario:
- "embeddingA" is not present in the system
- generate an image with a prompt that mentions "embeddingA"
- install "embeddingA" into stable-diffusion-webui
- drop the image into PNG Info -> txt2img -> Generate
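Networks referenced with explicit prompt syntax could at least be checked for upfront; a hedged sketch (`missing_networks` is a hypothetical helper, though the `<lora:name:weight>` / `<hypernet:name:weight>` syntax it matches is the real webui prompt syntax):

```python
import re

def missing_networks(prompt: str, installed: set) -> list:
    """Names referenced with <lora:...> / <hypernet:...> syntax that are not
    installed. Plain textual-inversion embeddings are bare words in the
    prompt, so they cannot be detected this way -- which is exactly why
    recording them in the image metadata matters."""
    refs = re.findall(r"<(?:lora|hypernet):([^:>]+)", prompt)
    return [name for name in refs if name not in installed]
```

For example, `missing_networks("portrait <lora:styleA:0.8>", {"styleB"})` flags `styleA` as missing, while the "embeddingA" scenario above would slip through unnoticed.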
A particular VAE also changes the image. Would be nice to have it explicitly listed in the metadata.
Adding to this: it would also be nice to know which cross-attention optimization method was used (xformers, Doggettx, etc.)
@nVitius Funnily enough, I found my own post exactly by searching for "Cross attention optimization" changes in recent releases. Short answer: Doggettx is selected automatically; I hadn't changed this setting manually before. (xformers disabled)
I've been struggling for the last week to reproduce at least one image from Civitai, which was not a problem before. I hadn't updated A1111 for a month, then decided to move forward to v1.5.1 and boom... 🤯😶‍🌫️🤕 nothing works as expected anymore.
I have multiple backup copies of A1111 to test different versions, and:
- I can still reproduce images from old prompts using the old version
- I have struggled all week to reproduce anything with v1.5.1, and it doesn't work even on a clean installation. 😔
I wish it were possible to find out the cause of the difference and add it to the metadata.
I'm going to close this because, as mentioned in https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12449#issuecomment-1672941031, this is already being done. The Torch version theoretically should not affect output, but the webui's version number is recorded in the infotext, and from that the Torch version in use at the time can be inferred if necessary. The Python version will not change results. The VAE and all extra networks (TI embeddings, hypernetworks, LoRA, etc.) are also added to the metadata as of 1.6.0.
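For anyone scripting against that infotext, the recorded fields sit on its trailing "Key: value, Key: value" line. A simplified sketch of pulling them back out (assumes no commas inside values; the webui's real parser is more robust than this):

```python
def parse_infotext_settings(infotext: str) -> dict:
    """Pull key/value settings from the last line of an A1111-style
    infotext. Simplified illustration only."""
    last_line = infotext.strip().splitlines()[-1]
    settings = {}
    for part in last_line.split(", "):
        key, sep, value = part.partition(": ")
        if sep:
            settings[key] = value
    return settings

info = ("masterpiece, best quality\n"
        "Negative prompt: blurry\n"
        "Steps: 20, Sampler: Euler a, Seed: 123, Version: v1.6.0")
print(parse_infotext_settings(info)["Version"])  # prints "v1.6.0"
```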
If you can't replicate your own images across past versions, and extension updates are also not the cause, feel free to open an issue with more details, including the full image metadata. Any seed-breaking changes are also documented. For reference, I can re-generate images from September identically.
Civitai images are notoriously flaky to reproduce, as hardware differences do factor into those, so please only open replication issues for your own images.