Ettore Di Giacinto


Really liking the idea, adding it to the roadmap :+1:

Also a minimal version of the config required is here https://github.com/jimmykarily/kamaji-demo/blob/8726857bb209dc1b7707eda08c9e379fa4fb1bee/config.yaml.tmpl#L31

Example of overriding the mount paths:

```
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=systemd-sysext merge
ExecStop=systemd-sysext unmerge
Environment="SYSTEMD_SYSEXT_HIERARCHIES=/usr/:/opt/:/lib/"
```

Thanks for opening the issue @senpro-ingwersenk. Looks like a feature that would indeed be interesting to have. Any chance you are up for taking a stab at it? I'd be happy...

If it's supported by vLLM it would work as well here. We do already support audio, video and image processing with vLLM.

This is supported now in LocalAI 3.0; Ultravox can be installed from the gallery.

@jespino there are two issues with streaming function results:
- determining when there is _no action_ to do. Currently we let the LLM reply with a "no reply" function to...

@jespino however, if functions with "stream": true break the client, that's a bug - we should just return everything in one event, so that at the very least we stay compatible.
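To illustrate the single-event fallback, here is a minimal sketch of merging streamed function-call fragments into one complete event. The chunk shape (`delta` with `name`/`arguments` keys) is an assumption for illustration, not LocalAI's actual wire format.

```python
# Hedged sketch: collapse streamed function-call fragments into a single
# event, so clients that can't handle "stream": true for function results
# still receive one complete payload. The chunk structure below is an
# assumption for illustration, not the real LocalAI/OpenAI wire format.

def collapse_function_stream(chunks):
    """Concatenate partial function-call deltas into one final event."""
    name = ""
    arguments = ""
    for chunk in chunks:
        delta = chunk.get("delta", {})
        name += delta.get("name", "")
        arguments += delta.get("arguments", "")
    return {"function_call": {"name": name, "arguments": arguments}}

# Example: three streamed fragments become one event
chunks = [
    {"delta": {"name": "get_weather", "arguments": ""}},
    {"delta": {"arguments": '{"city": '}},
    {"delta": {"arguments": '"Rome"}'}},
]
event = collapse_function_stream(chunks)
# event == {"function_call": {"name": "get_weather", "arguments": '{"city": "Rome"}'}}
```

The key point is that the aggregation happens server-side, so a non-streaming client sees exactly one well-formed event instead of partial JSON fragments.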

good catch, this should be fixed by https://github.com/mudler/LocalAI/pull/3789

mm, this looks like something more on the diffusers side of things - but I see `SINGLE_ACTIVE_BACKEND=true`, so local-ai should have killed the backend between the calls. Maybe related to #2720 ?...