float-trip
Emad discusses that here: https://old.reddit.com/r/StableDiffusion/comments/x9xqap/ama_emad_here_hello/inqj7dy/

> They do prompt editing on the way in and post processing on the way out basically.
I haven't compared speed, but it won't affect the output; you'll get the same results from the same seeds.
Looked into this for a bit. sharegpt_context.json has the same issue to an extent. It [seems](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) that [everyone](https://huggingface.co/datasets/philschmid/sharegpt-raw) is processing the ShareGPT data using Vicuna's pipeline, including [this part](https://github.com/lm-sys/FastChat/blob/main/fastchat/data/split_long_conversation.py), which...
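
For context, the linked script's job is to break conversations that exceed the model's context length into several shorter samples. A rough sketch of that kind of pass might look like the following; the field names (`id`, `conversations`, `value`), the whitespace "tokenizer", and the 2048-token budget are assumptions for illustration, not the script's exact behavior:

```python
# Hypothetical sketch of a "split long conversations" pass, roughly in the
# spirit of FastChat's split_long_conversation.py. Field names, the
# whitespace tokenizer, and the token budget are assumptions.
import json

MAX_TOKENS = 2048  # assumed budget per output sample


def count_tokens(text: str) -> int:
    # Stand-in for a real tokenizer; the actual pipeline uses the model's.
    return len(text.split())


def split_sample(sample: dict) -> list[dict]:
    """Split one conversation into pieces once the running length exceeds the budget."""
    pieces, current, length = [], [], 0
    for turn in sample["conversations"]:
        turn_len = count_tokens(turn["value"])
        if current and length + turn_len > MAX_TOKENS:
            pieces.append(current)
            current, length = [], 0
        current.append(turn)
        length += turn_len
    if current:
        pieces.append(current)
    return [
        {"id": f'{sample["id"]}_{i}', "conversations": piece}
        for i, piece in enumerate(pieces)
    ]


if __name__ == "__main__":
    with open("sharegpt_context.json") as f:
        data = json.load(f)
    split = [out for sample in data for out in split_sample(sample)]
    with open("sharegpt_context_split.json", "w") as f:
        json.dump(split, f, indent=2)
```

The side effect to notice is that every split piece becomes its own standalone sample, so later turns lose the earlier context they originally depended on.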