eWeb

82 comments by eWeb

Thanks for that, and sorry I didn't see that other bug / realise it was the same! Will check back tomorrow or later when I can put it properly through its...

@sestinj no it was just regular python. The whole file is just a simple fetch of data via API using the requests library, and the part I highlighted and tried...

So we should switch to pre-release for now? Apologies, I am not familiar with how long the cycle is for pre-release stuff to make it down to the production release....

I would like to add to this: is there a way we can point to a common repo on our HDD/SSD? Rather than have every LLM app download its own...

I found my models are going into \\wsl.localhost\Ubuntu\usr\share\ollama\.ollama\models, and the FAQ says we can move this folder by changing an environment variable. BUT what are these blobs? The...
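For anyone else wondering about the blobs: as I understand it (an assumption based on the layout of the models folder, not on anything the devs have confirmed here), they are content-addressed model layers named after the SHA-256 digest of their contents, and the manifests reference them by digest. A quick sketch to sanity-check that on your own machine; the `sha256-` filename prefix and the `blobs/` subfolder are assumptions from the observed layout:

```python
import hashlib
from pathlib import Path


def verify_blobs(models_dir: str) -> dict[str, bool]:
    """Check whether each blob's filename digest matches its contents.

    Assumes the layout seen above: <models_dir>/blobs/sha256-<digest>
    files holding model layers. Returns {blob_name: matches_digest}.
    """
    results: dict[str, bool] = {}
    for blob in Path(models_dir, "blobs").glob("sha256-*"):
        digest = hashlib.sha256(blob.read_bytes()).hexdigest()
        results[blob.name] = blob.name == f"sha256-{digest}"
    return results
```

If the names check out as digests, that would also explain why moving the folder via the environment variable is safe: nothing inside depends on the absolute path.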

@dcasota I appreciate you're trying to be helpful; I was assuming the devs check these issues once in a while. If you're not a dev, no need to answer that you...

Thanks, looks like I will have to try a manual install later instead of the one-click. I would have liked to hear from the devs about it, but too many...

> We only support GPU acceleration of Q4_0 and Q4_1 quantizations at the moment.

I can't load a Q4_0 into VRAM on either of my 4090s, each with 24 GB. Came...

Thanks for the explanation! So basically I need to wait until this Vulkan thing is... better? I appreciate you want to support all of Mac, AMD and Nvidia, that's a...

What about adding the ability to connect to a different API? textgen-webui supports the OpenAI API standard, and in fact other LLM apps can connect to textgen-webui for GPU support. Would you...
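To illustrate what I mean: any app that can POST an OpenAI-style chat completion request could talk to textgen-webui's server. A minimal sketch with only the standard library; the host, port, path, and model name below are placeholders I'm assuming for a local setup, not something from textgen-webui's docs:

```python
import json
import urllib.request

# Assumed local endpoint for an OpenAI-compatible server; adjust to
# wherever textgen-webui (or another backend) is actually listening.
BASE_URL = "http://localhost:5000/v1"


def build_chat_payload(prompt: str, model: str = "local-model") -> dict:
    """Minimal OpenAI-style chat completion request body."""
    return {
        "model": model,  # many local servers ignore or remap this field
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str, base_url: str = BASE_URL) -> str:
    """POST the prompt and return the first choice's message content."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Since the request/response shape is the same regardless of which backend answers, supporting this would effectively add GPU support via any OpenAI-compatible server.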