Not usable since the new driver update
Just updated to the 25.3.1 driver, and now after every generation I get the Adrenalin error popup saying the timeout was exceeded, and Stable Diffusion only generates grey/noise pictures.
If I downgrade to 25.2.1, it works again.
Same situation on RX 7800 XT
Same situation on a 7900 XTX after upgrading; getting weird OOM errors in hires fix or ADetailer.
But it seems to have permanently broken my whole machine, because I have since reinstalled all drivers and rolled back to the old version, and it still doesn't work.
For me it works on a 7900 XTX with HIP SDK 6.2 and ZLUDA 3.9.2.
I'm on a 7900 XT and mine works, but it is painfully slow: 20 minutes for a single 512x768 Flux.1 Dev generation at 22 steps, compared to the couple of minutes at most I used to get before. HIP SDK 6.2, ZLUDA 3.9.2 (nightly), AMD driver 25.3.1. Command args: --api --zluda --use-zluda --theme dark --no-download-sd-model --attention-quad --all-in-fp32
(Side note: if anyone knows better args than what I'm using, please share.)
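For anyone comparing setups: these args go on the COMMANDLINE_ARGS line of webui-user.bat. A minimal sketch, assuming the standard launcher of this fork; the flag set is copied verbatim from my post above, so trim it for your own install:

```bat
rem webui-user.bat; minimal sketch, assuming the standard launcher of this fork.
rem Flags copied verbatim from the post above; trim them for your own setup.
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--api --zluda --use-zluda --theme dark --no-download-sd-model --attention-quad --all-in-fp32

call webui.bat
```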
> (Side note: if anyone knows better args than what I'm using, please share.)

Well, I don't think you need --all-in-fp32. Try fp16, and if everything works, stick with that.
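In practice, "try fp16" here just means dropping the fp32 override, since fp16 is the usual default precision on these cards. A sketch of the same COMMANDLINE_ARGS without it (upstream Forge also has an explicit --all-in-fp16 switch; check launch.py --help to confirm this fork carries it):

```bat
rem Sketch: same launch args as above, minus the --all-in-fp32 override,
rem letting the web UI fall back to its default (normally fp16) precision.
set COMMANDLINE_ARGS=--api --zluda --use-zluda --theme dark --no-download-sd-model --attention-quad
```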
>> (Side note: if anyone knows better args than what I'm using, please share.)
>
> Well, I don't think you need --all-in-fp32. Try fp16, and if everything works, stick with that.

It's faster without it, but the terminal and the web UI freeze after a few generations.
> I'm on a 7900 XT and mine works, but it is painfully slow: 20 minutes for a single 512x768 Flux.1 Dev generation at 22 steps […]

Make sure you're not using the large 23 GB Flux model. Instead, use the quantized (GGUF) Flux models; they are smaller and run much faster. Go for the Q8_0 or Q6_K ones. Also add --cuda-stream; this will make it run much faster. Don't use --all-in-fp32.
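Putting that advice together, a sketch (assuming this fork follows upstream Forge, where GGUF Flux checkpoints are picked up from models\Stable-diffusion; confirm for your install):

```bat
rem Sketch: launch args with --cuda-stream added and the fp32 override dropped.
rem Assumes a quantized checkpoint such as flux1-dev-Q8_0.gguf sits in
rem models\Stable-diffusion, as in upstream Forge; confirm for this fork.
set COMMANDLINE_ARGS=--api --zluda --use-zluda --theme dark --no-download-sd-model --attention-quad --cuda-stream
```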
>>> (Side note: if anyone knows better args than what I'm using, please share.)
>>
>> Well, I don't think you need --all-in-fp32. Try fp16, and if everything works, stick with that.
>
> It's faster without it, but the terminal and the web UI freeze after a few generations.

That's very weird. Either a driver issue or the HIP SDK. Have you tried changing the driver? Maybe a slightly older one?
Hi, could this be related to the recent issues on latest? Normal generation works fine with batches of 4, but when it tries hires fix it goes out of memory, saying PyTorch tried to allocate ~32 GB of VRAM. I could do 4x batches with hires fix and ADetailer easily and fast in the past; now it's completely broken. Please help!
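While this is investigated, one generic thing worth trying is PyTorch's allocator tuning, set before launch. This is a standard PyTorch environment variable, not a fix from the maintainers, and whether it helps under ZLUDA with this particular regression is untested:

```bat
rem Sketch: standard PyTorch CUDA-allocator tuning (relevant because ZLUDA goes
rem through the CUDA allocator path). Reduces fragmentation; may not help here.
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.9,max_split_size_mb:512

call webui.bat
```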
https://github.com/lshqqytiger/stable-diffusion-webui-amdgpu-forge/issues/105