apcameron
@jancborchardt Has there been any further progress on this?
@Sopitive What models did you try, and did they generate the expected code?
@Sopitive Unfortunately I do not have a GPU to try Exllama but it would be interesting to see if [WizardCoder](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0) is able to generate decent code we could use.
Have a look at my reply to this post https://github.com/paul-gauthier/aider/issues/138
@aldoyh Have a look at my comments in https://github.com/paul-gauthier/aider/issues/138
@sammcj Try the tests in the Examples folder. Here is one of them https://github.com/paul-gauthier/aider/blob/main/examples/hello-world-flask.md
I am getting the same error. I then tried to update to the latest ggml format but I still get the error.

```
python privateGPT.py
llama.cpp: loading model from models/ggml-model-q4_0.bin...
```
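For anyone else chasing this, one quick sanity check (assuming you have `hexdump` available, and going by the magic values in the llama.cpp sources at the time) is to look at the first few bytes of the model file to see which format it actually is:

```
# Dump the first 16 bytes of the model; the leading 4 bytes are the format magic.
# The magic is stored little-endian, so in the ASCII column "lmgg" = old unversioned
# ggml, "fmgg" = ggmf, and "tjgg" = the newer ggjt format that current loaders expect.
hexdump -C models/ggml-model-q4_0.bin | head -n 1
```

If the magic is still one of the older ones, the conversion step did not actually rewrite the file.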
Follow this guide: https://github.com/nomic-ai/pygpt4all/issues/71. If it still fails, `git clone --recursive https://github.com/abdeladim-s/pygptj.git` and install it with `pip install -e .`
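For clarity, the full sequence I have in mind is roughly this (the uninstall line is just my assumption that the old copy was installed with pip; skip it otherwise):

```
pip uninstall -y pygptj                                          # remove the broken build first (assumption)
git clone --recursive https://github.com/abdeladim-s/pygptj.git  # --recursive pulls the bundled submodules
cd pygptj
pip install -e .                                                 # editable install from the freshly cloned source
```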
Did you try updating pygptj as well? If you are running Linux, dmesg may give you a clue as to where it is failing.
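Something like this is usually enough to spot an OOM kill or a crash the kernel logged (the flags are from util-linux dmesg and the grep patterns are just the usual suspects, so adjust for your distro):

```
# Recent kernel messages with human-readable timestamps.
sudo dmesg -T | tail -n 40
# Filter for the common reasons a process dies silently.
sudo dmesg -T | grep -iE 'oom|killed|segfault|trap'
```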
> I wrote the ClBlast code for koboldcpp. If there's interest here, it should be easy to port. I could open a PR.

I have been playing with it on...