Aaron Pham
@bojiang any comments on this?
@bojiang this is different, right? #4737 doesn't support custom packages; it just finds torch or vllm from our predefined packages, right? I guess we do parse the python packages for...
Hey there, which version of openllm are you using?
Can you also show the output of `openllm models --show-available`?
Oh sorry, this was an oversight on my part. I will release a quick patch to fix this.
Hey all, please try the latest change; hopefully I have smoothed out all the rough edges.
I have fixed this on main
Not currently on our high-priority list, but we will consider it, as we have some plans for batch inference.
Hi there, any updates so far @asafalinadsg?
@asafalinadsg if you don't have the bandwidth, I'm more than happy to take it over.