iChristGit

Results: 84 comments by iChristGit

> You need to follow the Windows-specific GPTQ 4-bit compilation instructions in this issue on GPTQ-for-LLaMa: [qwopqwop200/GPTQ-for-LLaMa#11 (comment)](https://github.com/qwopqwop200/GPTQ-for-LLaMa/issues/11#issuecomment-1462643016)

I am getting more errors; is that only because of Visual...

> In another issue someone said that Visual Studio 2022 doesn't work and that an older version was needed.
>
> I can't confirm, but it could be worth trying....

> 2019 works. Use the Native Tools Command Prompt. 30B 4-bit takes 40 seconds to respond on my 3090, however, so YMMV on its usability.

After I compile it, can I...
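For context, a minimal sketch of what that compile step typically looked like, run from the VS 2019 x64 Native Tools Command Prompt; the conda env name `textgen` and the checkout location are assumptions for illustration:

```
:: Open the "x64 Native Tools Command Prompt for VS 2019", then:
conda activate textgen                 :: assumed conda env name
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
python setup_cuda.py install           :: builds the 4-bit CUDA kernel with MSVC
```

The point of the Native Tools prompt is that it puts the MSVC compiler on PATH so the CUDA extension build can find it.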

> You can install it alongside 2022. You just need it for the install, but there's no harm in keeping it around in case you need it again, I suppose.

I...

> Make sure the conda env you install the extension in and the one that runs server.py are the same, and activated.

I am really a newbie haha, I...
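A minimal sketch of that workflow, assuming a conda env named `textgen`; the extension path is hypothetical:

```
:: Install the extension's dependencies and start the server from the SAME env
conda activate textgen                                   :: assumed env name
pip install -r extensions\my_extension\requirements.txt  :: hypothetical extension path
python server.py
```

If the extension is installed in one env and server.py launched from another, its imports won't resolve at runtime.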

I can't even run 30B 4-bit LLaMA with a 24GB-VRAM 3090 Ti + 32GB RAM, though I can run 13B natively. It ignores --disk and --cpu, I think, just loading into VRAM and erroring out. 7B-4bit...
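For reference, the flags in question were being passed along these lines; the model directory name is a placeholder, and whether --disk/--cpu apply to GPTQ loading has varied across webui versions:

```
:: --disk and --cpu are the offload flags mentioned above;
:: the model directory name is a placeholder
python server.py --model llama-30b-4bit --disk --cpu
```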

I also have an issue with the latest commit:

2023-10-21 12:20:30 INFO:Loading TheBloke_MythoMax-L2-13B-GPTQ...
2023-10-21 12:20:30 ERROR:Failed to load the model.
Traceback (most recent call last):
  File "D:\text-generation-webui\modules\ui_model_menu.py", line 201, in...

> Did you solve this? Getting the same error but a different resolution.

It got fixed by one of the commits. Are you on the latest commit?
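A minimal sketch of updating to the latest commit, assuming a git checkout of text-generation-webui at the path from the traceback above:

```
:: Update the webui checkout and its dependencies
cd D:\text-generation-webui
git pull
pip install -r requirements.txt --upgrade
```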

Please look into it again! Having the ability to upload different images to the canvas without resetting it would be huge!

> The current custom service domain does not seem to support filling in a locally deployed model's URL, as chat requests go through cloud servers, and the cloud...