VfBfoerst
Speaking of [litellm](https://github.com/BerriAI/litellm), I got it to work with my open-webui and it handles load balancing very well (tested with 2 GPUs and 4 Ollama instances). The only "problem" which appeared...
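In case it helps anyone with a similar setup, here is a minimal sketch of how the balancing can be wired up with litellm's Python `Router`; the hosts, ports, and the `llama2` model name are placeholders, not my actual config:

```python
# Minimal load-balancing sketch with litellm's Router: two Ollama instances
# registered under one shared alias. Hosts/ports and model name are placeholders.
from litellm import Router

model_list = [
    {
        "model_name": "llama2",  # shared alias used by clients
        "litellm_params": {"model": "ollama/llama2", "api_base": "http://127.0.0.1:11434"},
    },
    {
        "model_name": "llama2",
        "litellm_params": {"model": "ollama/llama2", "api_base": "http://127.0.0.1:11435"},
    },
]

router = Router(model_list=model_list)

# Calls against the alias are distributed across the registered instances.
response = router.completion(
    model="llama2",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```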
> Speaking of [litellm](https://github.com/BerriAI/litellm), I got it to work with my open-webui and it handles load balancing very well (tested with 2 GPUs and 4 Ollama instances). The only "problem" which...

Is there a way to export the usage directly, based on the user/API/team keys, for example as a CSV or PDF file, to analyze it?
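If there is no built-in export yet, here is a rough workaround sketch; the `/spend/logs` endpoint, the proxy address, and the flat response shape are all assumptions on my part, so please check the docs for your litellm version:

```python
# Workaround sketch, not a built-in litellm feature: fetch usage records from
# the proxy and flatten them into a CSV. The "/spend/logs" endpoint and the
# record fields are assumptions; verify them against your litellm version.
import csv
import requests

PROXY_URL = "http://127.0.0.1:4000"  # placeholder address of the litellm proxy
MASTER_KEY = "sk-..."                # placeholder admin key

resp = requests.get(
    f"{PROXY_URL}/spend/logs",
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    timeout=30,
)
resp.raise_for_status()
records = resp.json()  # assumed to be a list of flat dicts, one per request

fieldnames = sorted({key for record in records for key in record})
with open("usage.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
```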
Hey @aarnphm, I ran into the exact same behavior. I tried to deploy openllm within a podman container, with registry.redhat.io/ubi8/python-39:latest as the base image. Are there plans for containerizing openllm or...
I tried it on the system (RHEL 8.4) outside of the container with a venv (Python 3.9); the readyz endpoint also indicates `Runners are not ready.`. Start command: `openllm start opt...
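For reference, this is roughly how we probed the endpoint (a sketch; port 3000 is, as far as I know, openllm's default HTTP port, so adjust if your setup differs):

```python
# Minimal readiness probe against a locally running openllm server.
# The port is an assumption (3000 is the default as far as I know).
import requests

resp = requests.get("http://127.0.0.1:3000/readyz", timeout=5)
# While the runners are still loading, this returns a non-200 status
# and a body along the lines of "Runners are not ready.".
print(resp.status_code, resp.text)
```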
> Hey, I have fixed this issue on main and will release a patch version soon.

Is there a commit or a branch where we can see these changes? I...
We updated our packages with `pip3 install openllm --upgrade` and are now using `openllm-0.1.16`. The behavior did not change.
> Can you try with 0.1.17?

We upgraded to 0.1.17, but the behavior did not change.
Thank you for your effort btw 💯
After enabling debug mode, we found that the browser seems to percent-encode the colons in the URL, which leads to a 404 status code: `2023-06-28T11:19:28+0200 [INFO] [runner:llm-opt-runner:1] - "GET http%3A//127.0.0.1%3A8000/readyz...`
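A quick sketch with Python's standard library to illustrate what we think is happening: percent-encoding turns the colons of `http://127.0.0.1:8000/readyz` into `%3A`, which the server then cannot route:

```python
from urllib.parse import quote, unquote

url = "http://127.0.0.1:8000/readyz"

# Percent-encoding with "/" kept safe escapes the colons,
# reproducing exactly the URL we saw in the debug log.
encoded = quote(url, safe="/")
print(encoded)                  # http%3A//127.0.0.1%3A8000/readyz

# Decoding recovers the original, well-formed URL.
print(unquote(encoded) == url)  # True
```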