Iory1998
@TheBloke I am still facing the same error on PyTorch 2.0.1 with CUDA 11.8. I managed to install it using:

```
wget https://github.com/PanQiWei/AutoGPTQ/releases/download/v0.2.2/auto_gptq-0.2.2+cu118-cp310-cp310-win_amd64.whl
pip install auto_gptq-0.2.2+cu118-cp310-cp310-win_amd64.whl
```

Or you can just download the...
@TheBloke Thank you for your help and contribution to the community.
[UPDATE] I just shortened the name of the dataset folder so it does not include any spaces, and that did the trick.
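For what it's worth, here is a tiny shell sketch (hypothetical `/tmp` paths, not the actual dataset location) of why a space in the folder name can break tooling that builds commands without quoting:

```shell
# Start from a clean slate so the demo is repeatable.
rm -rf "/tmp/My Dataset" /tmp/MyDataset
mkdir -p "/tmp/My Dataset"

# Unquoted, the shell splits the path into two arguments
# ("/tmp/My" and "Dataset"), so the tool sees paths that do not exist.
ls /tmp/My Dataset 2>/dev/null || echo "unquoted path with spaces fails"

# Renaming the folder to drop the space sidesteps the problem entirely.
mv "/tmp/My Dataset" /tmp/MyDataset
ls -d /tmp/MyDataset && echo "renamed path works"
```

Quoting every path would also work, but renaming the folder fixes it even when the failing code is inside a library you don't control.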
Is it due to the T5-xxl? I was looking for an option in the settings to keep the model in VRAM when I came across this message: I wish there was...
I just used ComfyUI; it seems that models are now kept in VRAM. Generation with the same prompt takes 30 s for me, while changing the prompt takes 47.47 s (AR: 832x1216,...
> > I just used ComfyUI; it seems that models are now kept in VRAM. Generation with the same prompt takes 30 s for me, while changing the prompt takes 47.47 s...
> The issue is in memory_management, line 621:
>
> ```
> if loaded_model in current_loaded_models:
> ```
>
> The `loaded_model` is always different from what is in `current_loaded_models`....
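A minimal Python sketch of how that membership test can always fail (the `LoadedModel` class here is hypothetical, not Forge's actual implementation): when a wrapper class defines no `__eq__`, `in` falls back to identity comparison, so a freshly constructed wrapper never matches the cached entries even when it wraps the same model.

```python
class LoadedModel:
    """Hypothetical wrapper with no __eq__: compared by identity."""
    def __init__(self, model):
        self.model = model

model = object()
current_loaded_models = [LoadedModel(model)]

# A new wrapper around the *same* model is a different object,
# so the membership test is False and the model gets reloaded.
print(LoadedModel(model) in current_loaded_models)  # False

class LoadedModelFixed(LoadedModel):
    """Same wrapper, but equality compares the wrapped model."""
    def __eq__(self, other):
        return self.model is other.model
    def __hash__(self):
        return id(self.model)

current_loaded_models = [LoadedModelFixed(model)]
print(LoadedModelFixed(model) in current_loaded_models)  # True
```

If the real code constructs a new wrapper on every call, defining equality over the wrapped model (as sketched above) is one way the check could be made to hit the cache.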
> Can you provide a screenshot of the UI.

Here is the requested screenshot. I tried other models like the Q8 GGUF and ran into the same issue:
> [#1050 (comment)](https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050#discussioncomment-10341956)

Hi, if you mean the post with the VAE, well, it's the same ae.safetensors from X-Labs; I just renamed it for convenience. I mean, I've been...
> > > [#1050 (comment)](https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050#discussioncomment-10341956)
> >
> > Hi, if you mean the post with the VAE, well, it's the same ae.safetensors from X-Labs, I just...