Panchovix
### Describe the bug Hi there, wondering if there's a way to load the LLaMA-65B model with 64GB of RAM and 48GB of VRAM total (2x4090), since I'm trying...
### Describe the bug The issue is that the Python app itself seems to crash. I have 64GB of RAM and 128GB of swap. It worked (and works) without issue...
### Is there an existing issue for this? - [X] I have searched the existing issues and checked the recent builds/commits ### What happened? Updated in the dev branch, and...
### Is there an existing issue for this? - [X] I have searched the existing issues and checked the recent builds/commits ### What happened? Since https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/ff0e17174f8d93a71fdd5a4a80a4629bbf97f822 dev commit, if you...
Hi there, really thanks for all your work! There's a new issue, after an update on the dev branch of the webui. Since https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/ff0e17174f8d93a71fdd5a4a80a4629bbf97f822 dev commit, if you use it...
This adds support for the new NTK RoPE scaling, mentioned in https://github.com/turboderp/exllama/issues/115. "According to this post, this is a method of RoPE scaling that results in less perplexity loss and...
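The idea behind NTK-aware RoPE scaling can be sketched roughly as follows: instead of interpolating positions (linear scaling), it stretches the rotary frequency base by a factor derived from an `alpha` parameter. This is a minimal sketch based on the formula discussed in the linked exllama issue (`base * alpha ** (dim / (dim - 2))`); the function name is mine, not part of any library:

```python
def ntk_scaled_inv_freq(dim: int = 128, base: float = 10000.0, alpha: float = 1.0):
    """Inverse rotary frequencies with NTK-aware base scaling.

    Rather than squeezing positions into the original context window,
    the base is stretched so the low frequencies rotate more slowly,
    extending usable context with less perplexity loss.
    """
    scaled_base = base * alpha ** (dim / (dim - 2))
    return [scaled_base ** (-2 * i / dim) for i in range(dim // 2)]

# alpha=1.0 reproduces standard RoPE frequencies; alpha>1 extends context.
unscaled = ntk_scaled_inv_freq(alpha=1.0)
scaled = ntk_scaled_inv_freq(alpha=2.0)
```

With `alpha=1.0` the frequencies are unchanged; larger alpha values lower every non-trivial frequency, which is what stretches the effective position range.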
Hi! Was wondering if the autosplit reserve parameter from exllamav2 could be added as a config option when loading a model. This lets you set an amount of VRAM to reserve when using...
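One way such a config option could look: a comma-separated per-GPU reserve figure that gets converted to byte counts before loading. This is a hypothetical sketch, not the loader's actual code; the helper name and the assumption that values are given in MiB (one per GPU) are mine:

```python
def parse_autosplit_reserve(value: str) -> list:
    """Hypothetical helper: turn a per-GPU MiB string like "512,256"
    into a list of byte counts, one entry per GPU, suitable for a
    loader that wants reserved VRAM in bytes."""
    return [int(x.strip()) * 1024**2 for x in value.split(",")]

# "512,256" -> reserve 512 MiB on GPU 0 and 256 MiB on GPU 1
reserve = parse_autosplit_reserve("512,256")
```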
Hi there, just wanted to say thanks! This extension alongside AutomaticCFG is pretty nice. I ported them over to reForge and, with the help of @pamparamm, they work great! https://github.com/Panchovix/stable-diffusion-webui-reForge/tree/dev_upstream/extensions-builtin/reForge-SkimmedCFG