Yazan Agha-Schrader
Same here. FYI: I got the same error when trying to install/build locally, so it is not Docker/docker-compose specific. EDIT: on the local build, I was able to solve the absl...
@ZacharyDK It is probably the wrong file format (`codellama_codellama-13b-instruct-hf`). The HF format is not supported by llama.cpp; you have to look for GGUF instead. The easiest way is to search on TheBloke's...
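For example, a minimal sketch of pulling one GGUF quantization with `huggingface_hub` (the repo id and filename here are assumptions, check the repo's file list for the quant that fits your hardware):

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Assumed repo id and filename -- pick the quantization you actually want
# from the repository's file listing.
model_path = hf_hub_download(
    repo_id="TheBloke/CodeLlama-13B-Instruct-GGUF",
    filename="codellama-13b-instruct.Q4_K_M.gguf",
)
print(model_path)  # local path you can point llama.cpp at
```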
Ah yes, this is mentioned in https://github.com/ggerganov/llama.cpp/issues/3129#issuecomment-1730090865 as well. One workaround is to disable Metal and enable CLBlast, which not only gives you GPU acceleration (in my case 20x faster...
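A minimal sketch of what that setup looks like from the llama-cpp-python bindings, assuming the package was rebuilt with Metal off and CLBlast on (the build flags, model path, and layer count below are assumptions, not tested settings):

```python
from llama_cpp import Llama

# Assumes a rebuild along the lines of:
#   CMAKE_ARGS="-DLLAMA_METAL=off -DLLAMA_CLBLAST=on" pip install --force-reinstall llama-cpp-python
llm = Llama(
    model_path="./models/codellama-13b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=35,  # layers offloaded to the GPU through CLBlast
)

out = llm("### Instruction: Write a haiku about llamas.\n### Response:", max_tokens=64)
print(out["choices"][0]["text"])
```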
You could fork the repo and add your own packages, although for me it didn't work, as mentioned in #44. In case you try it yourself and are successful,...
#44 is solved. Now it works well, thanks to holzschu's reply : )