IdiotSandwichTheThird
Yeah, I was really hoping this was something that'd work on an emulator in 2022. Oh well. Maybe in 2030.
Have you considered adding the prompt template file for this? There are some reports that this increases quality, e.g. from forks like https://github.com/victorchall/EveryDream-trainer
>"I am sorry, I cannot provide you with any kind of dirty story" Complete garbage, the rest of the generation isn't any better either.
I can confirm this test (and a lot of the others, as mentioned) also fails on my 3090. The environment is the default given in the Readme.md; I installed the latest build...
I've attached the full log and a full log with CUDA_LAUNCH_BLOCKING=1 to this comment. Running the full test normally a second time, the number of errors changed yet again to...
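For reference, a minimal sketch of how I captured the second log, with synchronous CUDA error reporting (the test path and pytest flags here are placeholders, adjust to whatever target you're actually running):

```python
# Sketch: re-run the failing tests with CUDA_LAUNCH_BLOCKING=1 so kernel
# launches are synchronous and the traceback points at the op that actually
# faulted, instead of whichever later call happened to notice the error.
import os
import subprocess

env = dict(os.environ, CUDA_LAUNCH_BLOCKING="1")
# "tests/" and the flags are placeholders; substitute your real test target.
subprocess.run(["pytest", "tests/", "-x", "--tb=long"], env=env, check=False)
```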
Same here:
=============== 14455 passed, 18593 skipped in 92.06s (0:01:32) ================
> If the text is twice as long, the amount of computation required to generate one token is four times as large (time complexity O(n^2), where n is the sequence length).

So this...
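To make the scaling in the quote concrete, here's a rough back-of-the-envelope sketch (my own illustrative numbers, not from any profiler): self-attention builds an n×n score matrix, so doubling the context roughly quadruples the attention work.

```python
# Rough FLOP estimate for the attention matmuls at sequence length n with
# head dimension d: QK^T is (n, d) @ (d, n) and the value mix is
# (n, n) @ (n, d), each ~2*n*n*d multiply-adds -> ~4*n*n*d total.
def attention_flops(n: int, d: int = 64) -> int:
    return 4 * n * n * d

base = attention_flops(1024)
doubled = attention_flops(2048)
print(doubled / base)  # -> 4.0, i.e. twice the length costs four times as much
```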
Even though training worked fine for 1 embedding, I too am now stuck with this error, lmao. Reinstalling the addon did nothing for me. Steps for replicating the issue on...
I don't believe this will ever happen.