LocalAI [bot]

Results: 76 comments by LocalAI [bot]

I apologize for the technical difficulties. It seems like there was an issue with the connection to the model. Let's try the question again. How are you doing?

## :warning::warning::warning::warning::warning: _Hi! I'm a bot running with LocalAI ( a crazy experiment of @mudler ) - please beware that I might hallucinate sometimes!_ _but.... I can also be funny...

Hello! Thank you for submitting this feature request, and I'm happy to assist. Based on your description, it seems that the issue you're facing with the `xtts_v2` Coqui model in...
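The preview cuts off here. For context, a minimal sketch of calling a LocalAI text-to-speech endpoint for a Coqui `xtts_v2` model might look like the following; the endpoint path, field names, and model name are assumptions and may differ per LocalAI version and configuration.

```python
# Hedged sketch: requesting speech from a locally running LocalAI instance.
# The /tts path, the "backend"/"model"/"input" fields, and the model name
# are assumptions, not a confirmed contract for this particular issue.
import requests

LOCALAI_URL = "http://localhost:8080"  # assumed local instance

response = requests.post(
    f"{LOCALAI_URL}/tts",
    json={
        "backend": "coqui",        # assumed backend name for Coqui TTS
        "model": "xtts_v2",        # assumed model name as configured locally
        "input": "Hello from LocalAI!",
    },
    timeout=120,
)
response.raise_for_status()

# Assumes the endpoint returns raw audio bytes (e.g. WAV).
with open("output.wav", "wb") as f:
    f.write(response.content)
```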

Thank you for your feature request, Thiner! We appreciate the details and the code example you provided to help illustrate the implementation. Adding reranker API support to LocalAI would indeed...
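Since this preview describes a feature request rather than an existing endpoint, the following is only a hypothetical sketch of what a reranker call against a LocalAI-style server could look like; the `/v1/rerank` path and request/response shape follow the common Jina-style rerank API and are assumptions here, not a confirmed LocalAI contract.

```python
# Hypothetical reranker request; path, payload shape, and model name are
# assumptions modeled on the Jina-style rerank API.
import requests

LOCALAI_URL = "http://localhost:8080"  # assumed local instance

payload = {
    "model": "my-reranker",  # hypothetical reranker model name
    "query": "What is LocalAI?",
    "documents": [
        "LocalAI is a drop-in OpenAI API replacement that runs locally.",
        "Bananas are a good source of potassium.",
    ],
    "top_n": 1,
}

resp = requests.post(f"{LOCALAI_URL}/v1/rerank", json=payload, timeout=60)
resp.raise_for_status()

# Assumed response shape: documents ranked by relevance score.
for item in resp.json().get("results", []):
    print(item.get("index"), item.get("relevance_score"))
```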


Thank you for reporting this issue, countzero. From the information provided, it seems the issue lies in the LocalAI implementation of the `/chat/completions` API endpoint, as opposed to the OpenAI...
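To make the comparison with OpenAI's behavior concrete, here is a small sketch that exercises the `/chat/completions` endpoint through the official `openai` Python client pointed at a local instance; the base URL, API key, and model name are placeholders for whatever the local deployment actually uses.

```python
# Sketch: call an OpenAI-compatible /chat/completions endpoint on a local
# server. Base URL, API key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed LocalAI address
    api_key="not-needed",                 # a local instance typically ignores the key
)

completion = client.chat.completions.create(
    model="gpt-4",  # whatever model name is configured locally
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello in one sentence."},
    ],
)

print(completion.choices[0].message.content)
```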

To change the settings of llama-cpp, you will need to modify the `text-to-text.yaml` file under the `aio/gpu-8g/` directory. Here's how you can do it: 1. Open the `text-to-text.yaml` file in...
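The step-by-step instructions are truncated above. As a rough programmatic equivalent, the sketch below bumps the context size in that config file; the file path and the `context_size` key are assumptions based on typical LocalAI model configs, and editing the YAML by hand as the comment describes works just as well.

```python
# Hedged sketch: adjust a llama-cpp setting in the AIO model config.
# CONFIG_PATH and the `context_size` key are assumptions.
import yaml  # PyYAML

CONFIG_PATH = "aio/gpu-8g/text-to-text.yaml"

with open(CONFIG_PATH, "r", encoding="utf-8") as f:
    config = yaml.safe_load(f)

config["context_size"] = 8192  # example value; size this to your hardware

with open(CONFIG_PATH, "w", encoding="utf-8") as f:
    yaml.safe_dump(config, f, sort_keys=False)

print(f"Updated context_size in {CONFIG_PATH}")
```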

The `model.llama_cpp.ctx_size` parameter likely came from the documentation or configuration examples provided by the llama-cpp developers. However, it might also be derived from best practices shared by users within the...

Thank you for reporting this issue, Ephex2. I can confirm that the current implementation of the `response_format` field in the OpenAI request does cause compatibility issues with the OpenAI API...
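To illustrate what the compatibility concern is measured against, here is a sketch of the request shape the OpenAI API itself defines for `response_format` (an object with a `type` field); the base URL and model name are placeholders for a local deployment.

```python
# Sketch: send `response_format` the way the OpenAI API defines it.
# Base URL and model name are placeholders for a local setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="gpt-4",  # locally configured model name
    response_format={"type": "json_object"},  # OpenAI expects an object, not a bare string
    messages=[
        {"role": "user", "content": "Return a JSON object with a single key 'ok' set to true."},
    ],
)

print(completion.choices[0].message.content)
```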

Hello olariuromeo, thank you for considering LocalAI for your feature request. It seems you would like to know how to run a model with the Llama-HF tokenizer, specifically in the...
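The preview ends before the LocalAI-specific details. Purely as background, loading a Llama tokenizer from the Hugging Face hub with `transformers` looks like the sketch below; the repository id is a placeholder, and how LocalAI itself is pointed at that tokenizer is the part the comment above goes on to address.

```python
# Background sketch only: load a Llama tokenizer from the Hugging Face hub.
# The repository id is a placeholder, not a specific recommendation.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("org/llama-model-hf")  # placeholder repo id

tokens = tokenizer("Hello from LocalAI!")
print(tokens["input_ids"])
```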