Pascal
> If wanting to have it locally for now, you can skip sending those WebUI defaults:
>
> Show git diff for adding 'useServerDefaults' button to advanced settings

This is...
I made this: https://github.com/ggml-org/llama.cpp/compare/master...ServeurpersoCom:llama.cpp:webui-dynamic-config

Implemented dynamic config loading and reset behavior:
- On startup, the app checks if `localStorage.config` exists and is non-empty. If not, it fetches defaults...
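A minimal sketch of that startup check, assuming an injected storage interface so it can run outside the browser; names like `loadConfig` and `fetchServerDefaults` are illustrative, not the actual PR code:

```typescript
// Storage-like interface (mirrors the part of localStorage we need).
type ConfigStore = { getItem(key: string): string | null };

// Load the stored config if present and non-empty; otherwise fall back
// to the server-provided defaults.
function loadConfig(
  store: ConfigStore,
  fetchServerDefaults: () => Record<string, unknown>
): Record<string, unknown> {
  const raw = store.getItem("config");
  if (raw !== null && raw.trim() !== "") {
    try {
      return JSON.parse(raw);
    } catch {
      // Corrupt entry: fall through to defaults.
    }
  }
  return fetchServerDefaults();
}
```

In the real WebUI the fallback would come from the server's `/props`-style defaults; injecting it as a function keeps the check testable.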
OK, I pulled the latest ggml submodule from llama.cpp, built it with CUDA, and it works on stable-diffusion.cpp :) The submodule in this project needs an update :)
Now we need to check out 8b9cc7cdd8a0dcf0176c60c755322c95b5965299 to get the latest working ggml, because Georgi is working on it.
I have to do a little cleanup; the patch was not merged properly on my side. -> draft
This PR is now clean, but it was developed after this one: [https://github.com/ggml-org/llama.cpp/pull/16562](https://github.com/ggml-org/llama.cpp/pull/16562)
For the tool call inspector, do you prefer having one spoiler block per tool call, or a single aggregated spoiler wrapping all tool calls in the message? It's rebased/reworked now....
Feel free to dissect the architecture as deeply as you want! Component boundaries, store coupling, service layering, anything that smells non-idiomatic. Also, if we end up polishing this feature further,...
And we could even imagine the architecture being reusable later: for example, a small JavaScript execution module decoupled from the UI, so the model could actually interact with a...
Includes a very small optimization from the previous PR (scroll listener removal). It landed here intentionally :D