Maciej Grabowski
Works fine as-is. You can share a volume across multiple instances of ollama, so that a model uploaded to one instance is visible to all of them.
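Roughly what I mean, as a docker-compose sketch (service names and host ports here are just examples; /root/.ollama is where the official image keeps its models):

```yaml
# Two ollama containers reading from the same named model volume.
services:
  ollama-a:
    image: ollama/ollama
    volumes:
      - ollama-models:/root/.ollama
    ports:
      - "11434:11434"
  ollama-b:
    image: ollama/ollama
    volumes:
      - ollama-models:/root/.ollama
    ports:
      - "11435:11434"   # second instance exposed on a different host port

volumes:
  ollama-models:
```

A model pulled through either instance lands in the shared volume, so the other instance sees it too.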
I did set up gpu_layers in the model file, despite the documentation stating in at least two places that gpu_layers is only used with cuBLAS, so not for AMD. I swapped between build...
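For reference, a sketch of the kind of model file in question, assuming a LocalAI-style per-model YAML (the name, file and layer count are placeholders, and the exact keys may differ between versions):

```yaml
# Sketch of a per-model config with GPU offloading requested.
# Model name and file are placeholders, not my actual setup.
name: my-model
backend: llama
parameters:
  model: my-model.gguf
gpu_layers: 35   # the setting the docs say only applies with cuBLAS
```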
Yeah, same issue on PC, with additional info on "car unrecognized". It would be great to have an X or a toggle to disable those messages, as they make it completely...
Also, removing the "dashcam event" and "unrecognized car" events spawns a "Flowpilot not available" message, which also takes up half of the screen... You need to edit some .java file to hide those...
Will do. Thanks for your hard work!
Overall, the current architecture makes it unhostable for external access: communication between services needs to be both publicly available and unencrypted.
Hitting the same issue, but it seems that it may be related to changes in llama.cpp, not LocalAI itself. Perhaps this one: https://github.com/ggml-org/llama.cpp/commit/898acba6816ad23b6a9491347d30e7570bffadfd?
> What I ended up with now is the vaultwarden server which unfortunately doesn't replicate all of bitwarden's API functionality, but only client-side. Which means that I have to create...
Same here.
Seems like open-webui is incompatible with docling-serve v12 and v13. The last working version is v11.
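A workaround for now is pinning the container tag instead of tracking latest. A sketch, assuming docling-serve runs via Docker; the image name, the tag spelling (v0.11.x vs v11) and the port are from memory, so verify them against the docling-serve releases:

```yaml
# Pin docling-serve to the last release that still worked with open-webui.
# Image name, tag format and port are assumptions, not confirmed values.
services:
  docling-serve:
    image: quay.io/docling-project/docling-serve:v0.11.0  # pinned, not :latest
    ports:
      - "5001:5001"
```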