Sertaç Özercan
@renatokeys thanks for opening an issue! This is blocked by https://github.com/mudler/LocalAI/issues/1249
@lordofthejars I am not sure if the ONNX format is supported by LocalAI yet. You can check compatibility here: https://localai.io/model-compatibility/
Docker already has this functionality built in via platforms (such as `linux/amd64/v2`, `linux/amd64/v3`, etc.): https://en.wikipedia.org/wiki/X86-64#Microarchitecture_levels This doesn't exist for GPUs.
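For reference, a sketch of how those platform variants are selected with the docker CLI (the image name `myapp` is just a placeholder, not from the thread):

```shell
# Build an image targeting the x86-64-v3 microarchitecture level.
docker buildx build --platform linux/amd64/v3 -t myapp .

# Pull the matching variant of an image, if the publisher provides one.
docker pull --platform linux/amd64/v3 myapp
```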
fixed with #225
https://github.com/darrenburns/elia provides a nice tui

```
docker run -d --rm -p 8080:8080 ghcr.io/sozercan/llama3:8b

echo -e '[[models]]\nname = "openai/llama-3-8b-instruct"\napi_base = "http://localhost:8080"' >> ~/.config/elia/config.toml

elia -i "hello world" -m openai/llama-3-8b-instruct
```
@JaydipGabani same here for lint
@olgaliak you can use `docker build -f Dockerfile.cpu` to specify a custom Dockerfile. Would that work, or is there a case that needs `Dockerfile` as the filename?
@sfxworks thanks for the report! I can repro this behavior. It seems like krew is trying to be clever by adding a `kubectl-` prefix in front of the executable and then...
@myrulezzz what version of kubectl-ai are you using? v0.0.11 added support for custom endpoints. I am not sure about textgen ui, but kubectl-ai works with https://github.com/sozercan/aikit or https://github.com/mudler/LocalAI which provides...
Good point @toddysm! I changed it to `acr annotate` in the above example to make it more generic but still a bulk operation for ACR. This can be part of...