Eric Curtin

Results 479 comments of Eric Curtin

> Sorry to be dense, but I can't tell if you're reiterating that whisper.cpp does not provide interactive support or if you're saying that you do not like the concept...

This PR is a perfect example of why we don't just clone main/master: https://github.com/containers/ramalama/pull/474

> Yes, failing each time whisper.cpp is updated is perhaps a better solution than blindly updating along with it. Anyway, back to the meat of my question...

> > this issue is open for someone to complete it :)
>
> Great. I'm just having trouble understanding what "complete it" means to you. That's why I'm asking...

stdin seems fine to me. We can also autodetect when input is being piped in via stdin; that's better for usability, since you don't need the explicit '-' then, although we can have...
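A minimal sketch of the autodetection idea: treat an explicit `-` argument as a request to read stdin, and otherwise fall back to checking whether stdin is a TTY (piped input is not). The helper name `wants_stdin` is hypothetical, not RamaLama's actual implementation.

```python
import sys


def wants_stdin(args: list[str], stdin_is_tty: bool) -> bool:
    """Decide whether to read a prompt from stdin.

    An explicit '-' always means stdin; otherwise, autodetect piped
    input: when stdin is not a TTY, something is being piped in.
    (Hypothetical helper, sketching the autodetection idea.)
    """
    return "-" in args or not stdin_is_tty


# In a real CLI you would call it with the live terminal state:
#   if wants_stdin(argv, sys.stdin.isatty()):
#       prompt = sys.stdin.read()
```

Separating the decision from `sys.stdin.isatty()` keeps the logic testable without a terminal.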

alias rl="ramalama" springs to mind. This isn't incredibly hard to implement as a separate project, but it's basically "implement my own custom shell". The thing is, when you execute run,...

@rhatdan question: do we want to allow people to push Ollama and/or Docker AI formats to random OCI registries as-is, without conversion, or force the user to...

Linked issue: https://github.com/containers/podman/issues/25758 — @runcom is looking into something similar.

I think you found your answer in the above doc: "vLLM has experimental support for macOS with Apple silicon. For now, users shall build from the source vLLM to natively...

A general issue around vLLM should be opened if there isn't one already; it could do with some Containerfile work. llama.cpp is more suitable as a macOS runtime today.