blessedcoolant
> One of the problems with the current WebUI is that there is no feedback during the 30s it takes to convert a model. However, there are defined steps in...
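For reference, something as simple as a per-stage callback would cover it. A rough sketch, with made-up names (`convert_model`, `on_step`, and the stage list are all hypothetical, not the actual converter's API):

```python
from typing import Callable

# hypothetical stage names; the real converter defines its own steps
CONVERSION_STAGES = ["load checkpoint", "map weights", "build config", "save diffusers model"]

def convert_model(checkpoint_path: str, on_step: Callable[[int, int, str], None]) -> None:
    """Run each conversion stage, reporting progress so the WebUI can render it."""
    total = len(CONVERSION_STAGES)
    for i, stage in enumerate(CONVERSION_STAGES, start=1):
        on_step(i, total, stage)  # e.g. push an event to the frontend
        ...  # actual work for this stage would go here

# usage: convert_model("model.ckpt", lambda i, n, s: print(f"[{i}/{n}] {s}"))
```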
Can confirm what @psychedelicious reported. I get nearly 3x speeds on an RTX 3080 laptop GPU on Windows too. But I have to note that this speed boost keeps degrading...
Further testing. I installed `xformers 0.0.17 dev` and left `_adjust_memory_efficient_attention` enabled along with Torch 2. The generation speeds are even better. I went up from `2it/s` to `6it/s` now for...
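For anyone wanting to reproduce, this is roughly the equivalent setup on a plain diffusers pipeline (model ID is just an example; in InvokeAI this goes through `_adjust_memory_efficient_attention` instead):

```python
import torch
from diffusers import StableDiffusionPipeline

# keep xformers' memory-efficient attention enabled even under torch 2.x
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a photo of an astronaut riding a horse").images[0]
```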
I'll run through some tests and see if the degradation persists. But in either case, I think upgrading to PyTorch 2 is a no-brainer once we have all the...
I've been using 2.0.0 since it was released. But I am also using xformers together with it because I get much faster results. There are obviously determinism issues, but those exist with...
The implementation in this PR is fine. The only issue I noticed is that when applying the LoRA, we're still getting a pretty noisy output at the default eta of...
This PR is fine to merge, I think. Even the Hyper SDXL paper, which utilizes the TCD Scheduler, prefers the gamma value to be 1.0, and as per my...
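Roughly the setup I was testing against, sketched with plain diffusers (the base model and LoRA repo IDs are just examples):

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("h1t/TCD-SDXL-LoRA")  # example TCD LoRA
pipe.fuse_lora()

# `eta` here is the gamma from the TCD paper; pushing it up to 1.0
# cleans up the noise I was seeing at the lower default
image = pipe(
    "a photo of a cat",
    num_inference_steps=4,
    guidance_scale=0.0,
    eta=1.0,
).images[0]
```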
> Sorry was just curious about the `eta` check, didn't mean to block this

Nah, you're right. Initially I did the eta check because I thought it was only on...
I think it's better we host them ourselves. The GitHub downloads can sometimes take ridiculously long for no apparent reason. The UI use case currently would be to right-click on...
> Did we consider just using the transformers implementation (https://huggingface.co/docs/transformers/v4.43.3/en/model_doc/depth_anything_v2)? Is there a reason that we are choosing to maintain a copy of the model source?

Did not consider it because...
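For comparison, the transformers route would look roughly like this (the small variant's Hub ID is an example, as I understand the linked docs):

```python
from PIL import Image
from transformers import pipeline

# transformers >= 4.43 ships Depth Anything V2 natively
depth_estimator = pipeline(
    "depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf"
)
result = depth_estimator(Image.open("input.png"))
result["depth"].save("depth_map.png")  # PIL image of the predicted depth
```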