K0IN
You can always build your own adapter for any LLM; check out the function ```createLLM```
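A rough sketch of what a custom adapter built on ```createLLM``` could look like. The actual signature of ```createLLM``` isn't shown here, so the import path, the option names, and the callback shape below are assumptions for illustration only, not the real API:

```ts
// Hypothetical usage only: the real createLLM options may differ,
// check the actual function definition in the repo before copying this.
import { createLLM } from "./llm"; // assumed import path

// A minimal adapter that forwards prompts to a local HTTP-based LLM endpoint.
const myAdapter = createLLM({
  name: "my-custom-llm", // assumed option
  complete: async (prompt: string) => { // assumed callback signature
    const res = await fetch("http://localhost:8080/v1/complete", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    });
    const data = await res.json();
    return data.text as string; // return the model's completion text
  },
});

export default myAdapter;
```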
waiting for https://github.com/denoland/deno/issues/24318
Serve must be launched with ```docker run --gpus all```
But before we merge this, please validate that it is working for you. I got some issues (but it seems to be my host), since the (locally) built scuda does not...
> But before we merge this, please validate that it is working for you. I got some issues (but it seems to be my host), since the (locally) built scuda does...
We need debug logging for this. Also: could you maybe try the Earthly build in my fork and see if it works natively? https://github.com/K0IN/scuda/blob/main/Earthfile
Since Earthly is repeatable and can run in GH Actions with cross-compilation as well as locally, you get a build system that works both on your machine and on GitHub, I...
Hey @kevmo314, check out my fork https://github.com/K0IN/scuda. I built pipelines for prebuilt servers (Docker) and client binaries (see Releases). I'm a bit hesitant to create a PR just now since I'm...
I have this issue nearly every time I use multiple LLMs (Claude and GPT-4.1) on the latest version.
Any updates? I'm running into the same issue.