P O

Results: 4 comments from P O

I am seeing the same problem on an M1 Max MacBook Pro / Ventura 13.3.1 / Docker 4.17.0 (99724):

```
2023-04-26 11:17:05 localai-api-1 | llama.cpp: loading model from /models/ggml-gpt4all-j
2023-04-26 11:19:07...
```

After `make build` then `make run`:

```
curl http://localhost:8080/v1/models
{"object":"list","data":[{"id":".DS_Store","object":"model"},{"id":".devcontainer","object":"model"},{"id":".dockerignore","object":"model"},{"id":".env","object":"model"},{"id":".git","object":"model"},{"id":".github","object":"model"},{"id":".gitignore","object":"model"},{"id":".vscode","object":"model"},{"id":"Dockerfile","object":"model"},{"id":"Earthfile","object":"model"},{"id":"LICENSE","object":"model"},{"id":"Makefile","object":"model"},{"id":"README.md","object":"model"},{"id":"api","object":"model"},{"id":"charts","object":"model"},{"id":"examples","object":"model"},{"id":"go-gpt2","object":"model"},{"id":"go-gpt4all-j","object":"model"},{"id":"go-llama","object":"model"},{"id":"go.mod","object":"model"},{"id":"go.sum","object":"model"},{"id":"local-ai","object":"model"},{"id":"main.go","object":"model"},{"id":"models","object":"model"},{"id":"pkg","object":"model"},{"id":"prompt-templates","object":"model"},{"id":"renovate.json","object":"model"},{"id":"tests","object":"model"},{"id":"","object":"model"}]}
```

```
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "ggml-gpt4all-j",
  "messages": [{"role": "user", "content": "How are you?"}],
  "temperature": 0.9
}'
{"error":{"code":500,"message":"llama:...
```
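The `/v1/models` output above lists repository files (`.DS_Store`, `Makefile`, `main.go`, …) rather than model files, which suggests the server is enumerating whatever directory the models path points at. A minimal sketch of that behavior, assuming the handler simply lists directory entries (`list_models` is a hypothetical helper for illustration, not LocalAI's actual code):

```python
import os

def list_models(models_path):
    # Hypothetical sketch: every directory entry becomes a "model" id.
    # This would explain why pointing --models-path at the source tree
    # makes repo files show up as models in the /v1/models response.
    return {
        "object": "list",
        "data": [{"id": name, "object": "model"}
                 for name in sorted(os.listdir(models_path))],
    }
```

Under this assumption, the fix is simply to point the models path at the directory that actually contains the `.bin` model files.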

Thanks @MartyLake, `./local-ai --models-path models/` got things working for me. Although I still noticed the model failing with 'unexpectedly reached end of file' when built via `make build`.

```
curl http://localhost:8080/v1/models...
```

I am now using the ggml-alpaca-7b-q4 model, and for my usage it works very well on my M1 Max 32GB MacBook Pro.