morphles
Just to add: this is still an issue, and quite a serious one IMO, since unless you can force everything (first, that's not necessarily possible; second, why should you even have to) into readable the...
I was not aware latents could be extracted like that without the VAE; frankly, the image looks quite amazing.
This somehow just convinces me even more that my hi-res/multi-sampling idea is good :) I thought latents would be something more abstract, and not directly convertible to pixel values,...
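For anyone curious how a latent can be previewed without the VAE: a common trick is a fixed linear map from the 4 latent channels to RGB. A minimal sketch, assuming NumPy and a `(4, h, w)` SD-style latent; the coefficient matrix here is illustrative only (real values differ per model):

```python
import numpy as np

# Illustrative 4->3 channel mixing matrix (NOT the exact values any
# particular model uses; each row maps one latent channel to R, G, B).
LATENT_TO_RGB = np.array([
    [ 0.298,  0.207,  0.208],
    [ 0.187,  0.286,  0.173],
    [-0.158,  0.189,  0.264],
    [-0.184, -0.271, -0.473],
])

def latent_preview(latent: np.ndarray) -> np.ndarray:
    """latent: (4, h, w) -> uint8 RGB preview (h, w, 3), no VAE decode."""
    rgb = np.tensordot(latent, LATENT_TO_RGB, axes=([0], [0]))  # (h, w, 3)
    rgb = (rgb + 1.0) * 127.5  # map roughly [-1, 1] onto [0, 255]
    return np.clip(rgb, 0, 255).astype(np.uint8)

demo = latent_preview(np.zeros((4, 8, 8)))
print(demo.shape, demo.dtype)
```

It's blurry and color-shifted compared to a real decode (the output is 1/8 resolution), but it's why those no-VAE previews can still look recognizable.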
I also have it; maybe a dependency got screwed up?
Hm, but I just downloaded oobabooga and ran the installer; shouldn't it choose the correct version? Well, I'll try to override manually too.
Well, I'm on Linux, so it will be different for me :)
llama.cpp sitting idle at ~140W on a 7900 XTX ... this is unacceptable. As a side note, power limiting in general is way too convoluted (I couldn't even set it up) compared to...
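For reference, on Linux a power cap can usually be set through the amdgpu hwmon sysfs interface (`rocm-smi` also exposes a power-cap option). A minimal sketch; the `card0`/`hwmon*` paths vary per system, writing needs root, and the apply line is left commented out:

```shell
# Hedged sketch: cap GPU power via the amdgpu hwmon interface.
WATTS=200
MICROWATTS=$((WATTS * 1000000))   # power1_cap is expressed in microwatts
echo "$MICROWATTS"
# Uncomment to actually apply (adjust card0/hwmon* to your system):
# echo "$MICROWATTS" | sudo tee /sys/class/drm/card0/device/hwmon/hwmon*/power1_cap
```

The current and maximum allowed caps can be read back from `power1_cap` and `power1_cap_max` in the same directory.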
Any news on this? With dual 7900 XTXs I'm still getting garbage from the hipBLAS build, regardless of model, but on a single card it works. I tried the `-DLLAMA_CUDA_PEER_MAX_BATCH_SIZE=0` option, but...
@slaren yeah, I know that, and I have no hope of it being fixed on AMD's side soon, so I have very little hope of using PyTorch with dual cards....
Yeah, understandable :) For now I'm mostly happy with Vulkan, and once Mixtral is supported I think I'll have basically no need for the HIP build. Still, if this somehow progresses,...