Hi @testKKP can you post your model definition? The `core` images do not include Python, so SentenceTransformers and Coqui are not available. For your reference, the backends that require Python are:...
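For reference, a minimal sketch of what a Python-backed model definition could look like, assuming the `sentencetransformers` backend and an illustrative file/model name (adjust names and paths to your setup):

```
# Illustrative only: write a minimal SentenceTransformers model definition
# into the models directory (file name and model name are assumptions).
cat > ./models/my-embeddings.yaml <<'EOF'
name: my-embeddings
backend: sentencetransformers
embeddings: true
parameters:
  model: all-MiniLM-L6-v2
EOF
```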
Hi @ThaDrone this is a different issue, since gpt-4 from the AIO images uses the llama.cpp backend while it seems that @testKKP is using a Python (transformers?) backend. Can you open a...
There is a Mixtral configuration in the examples directory: https://github.com/mudler/LocalAI/tree/master/examples/configurations/mixtral. Download those files into your models directory together with the GGUF file. The example uses `mixtral-8x7b-instruct-v0.1.Q2_K.gguf`: you can choose whatever...
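A sketch of the download steps, assuming the example's `mixtral.yaml` file name and TheBloke's GGUF mirror on Hugging Face (check the linked directory for the exact file names):

```
cd ./models
# Grab the example configuration (file name assumed; see the linked directory)
curl -LO https://raw.githubusercontent.com/mudler/LocalAI/master/examples/configurations/mixtral/mixtral.yaml
# Grab the quantized weights referenced by the example (source URL assumed)
curl -LO https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/mixtral-8x7b-instruct-v0.1.Q2_K.gguf
```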
Hi @chino-lu, about your issue the error is here:

```
8:02AM DBG GRPC(5c7cd056ecf9a4bb5b527410b97f48cb-127.0.0.1:39595): stderr Device 0: NVIDIA T400 4GB, compute capability 7.5, VMM: yes
8:02AM DBG GRPC(5c7cd056ecf9a4bb5b527410b97f48cb-127.0.0.1:39595): stderr llm_load_tensors:...
```
Can you try:

```
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "gpt-4",
  "messages": [{"role": "user", "content": "How are you doing?"}],
  "temperature": 0.1
}'
```

or also `curl http://127.0.0.1:8080/models` to...
It seems it does not find the gpt-4 model definition. Can you post the contents of the `/home/gy/local-ai/models` directory and also the output of `curl http://127.0.0.1:8080/models`? Thank you
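For example, something like this would capture both (the path is the one from your setup):

```
# Collect the requested info: model files on disk and the models the API sees
ls -la /home/gy/local-ai/models
curl http://127.0.0.1:8080/models
```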
Can you try to put this file in the models directory and restart? https://github.com/mudler/LocalAI/blob/3c778b538aee121543ddaeb334cbb7f0e4790d98/aio/gpu-8g/text-to-text.yaml
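A sketch of how to do that from the shell, assuming the models directory from earlier in the thread and a container named `local-ai` (adjust to your setup):

```
# Download the pinned text-to-text.yaml straight into the models directory
curl -L -o /home/gy/local-ai/models/text-to-text.yaml \
  https://raw.githubusercontent.com/mudler/LocalAI/3c778b538aee121543ddaeb334cbb7f0e4790d98/aio/gpu-8g/text-to-text.yaml
# Restart so the new definition is picked up (container name assumed)
docker restart local-ai
```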
Hi @ai-bits sorry for the late reply, from the quoted text it seems a curl error: are you sure there are no spurious characters in the request? Just to...
Happy to hear that you finally fixed it. I thought it was a shell quoting issue, but I'm really inexperienced with Windows. Shall the issue be closed?
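If anyone else hits the same quoting problem on Windows, a workaround sketch (not from this thread) is to keep the JSON body in a file so the shell never has to quote it:

```
# body.json contains the JSON payload; curl reads it verbatim with @
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d @body.json
```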
Good catch. Why don't you open a PR? :)