Running on Linux with an RX7900XT (20GB of VRAM), with an Ollama LLM loaded and tiny.en for faster-whisper, still 6GB of VRAM to spare, and I have the same error....
Had the same error with ROCm 5.7; [this](https://www.reddit.com/r/StableDiffusion/comments/1fzcx7y/automatic1111_torch_rocm_57_is_giving_403/) reddit post fixes it by removing `nightly` from the URL.
I am using `gc.collect()`, followed by `torch.cuda.empty_cache()` and `torch.cuda.synchronize()` when `torch.cuda.is_available()` (spelled out below), as it sometimes helps a bit, but even with the tiny.en model, VRAM usage on my RX7900XT starts from 12GB at idle...
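For reference, this is the same cleanup written out as a small helper; the function name is just for illustration, and on ROCm builds of PyTorch the `torch.cuda.*` calls map onto HIP, so `torch.cuda.is_available()` is still the right check:

```python
import gc
import torch

def free_vram():
    """Best-effort cleanup after a transcription pass.

    A sketch of the snippet above; it returns cached allocator blocks to the
    driver but is not a guaranteed fix for the idle VRAM usage described here.
    """
    gc.collect()                    # drop Python-side references first
    if torch.cuda.is_available():   # True on ROCm/HIP builds of PyTorch too
        torch.cuda.empty_cache()    # release cached blocks back to the device
        torch.cuda.synchronize()    # wait for any pending kernels to finish
```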
I am just a random person, but the Whisper models would not only have to be loaded on the CPU, which would slow everything down considerably, but would probably also take around 2GB of RAM...
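For what it's worth, a CPU-only fallback with faster-whisper would look roughly like this (the audio path is just a placeholder); it avoids VRAM entirely at the cost of speed and the extra system RAM mentioned above:

```python
from faster_whisper import WhisperModel

# Hypothetical CPU-only setup: int8 quantization keeps the RAM footprint
# smaller, but transcription is noticeably slower than on the GPU.
model = WhisperModel("tiny.en", device="cpu", compute_type="int8")

segments, info = model.transcribe("audio.wav")  # placeholder input file
for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```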