Luna Midori

20 comments by Luna Midori

> > BUILD_GRPC_FOR_BACKEND_LLAMA=ON
>
> If this is needed, why was it not included in the example `.env` file in the release? I also don't see mention of it outside the...

@noblerboy2004 please post your model's YAML config file so we can review it properly

> I am new to this project, too. It looks like you need to set up gpu_layer in the config somewhere, but I don't know how. https://localai.io/howtos/easy-model-import-downloaded/

@noblerboy2004 You have GPU layers set to 0, so 0% of your GPU will be used... Here is a fixed YAML for easy copy and paste; make sure to...
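For anyone hitting the same problem, here is a minimal sketch of a model YAML with GPU offloading enabled. This is an illustration only: the config name, model filename, and layer count below are placeholder assumptions, not @noblerboy2004's actual setup.

```yaml
# Hypothetical LocalAI model config (e.g. models/my-model.yaml).
# The model filename and layer count are illustrative assumptions.
name: my-model
gpu_layers: 35        # layers offloaded to the GPU; 0 means the GPU is never used
f16: true
context_size: 2048
parameters:
  model: my-model-q4_0.bin
```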

As a note, gpt4all is not fully supported at this time, and the ``open-llama`` model uses ``llama-stable``, not ``llama``. If you would like more info on setting up a model...
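As a sketch of where that backend setting lives in the same style of model YAML (the model filename below is a placeholder assumption):

```yaml
# Hypothetical config for the open-llama model mentioned above.
name: open-llama
backend: llama-stable   # per the note above: llama-stable, not llama
parameters:
  model: open-llama-7b-q4_0.bin
```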

@bbaaxx it's next on my list! Just need to get WSL working

@timothycarambat this is the streaming bug fix we added for LocalAI. The fix is working, but we still need to figure out why it's dropping packets

@timothycarambat at least I'm not the only one with this bug (I am starting to think it may be the way some routers work...)

@timothycarambat could you add a "no streaming" checkbox to the LLM screen?