Xiaochang Wu
> Were you able to run this locally? Does it work? I'm just looking forward to seeing how to update this project to support the latest vLLM. I am working...
> Hey all,
>
> I also have similar updates on a fork; however, I've struggled to get feedback from the maintainers to work out how to proceed here. I...
> @xwu99 Your comment says `vllm is installed seperately from source for now`, but I don't see it being installed anywhere?

You just need to follow the official vLLM installation guide.
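For readers following along: once vLLM has been built from source per the official guide, a minimal sanity check like the one below (standard vLLM imports only; nothing here is specific to this PR) confirms the build is the one being picked up.

```python
# Minimal post-install sanity check, assuming vLLM was built from source
# following the official installation guide.
import vllm

print(vllm.__version__)  # version of the build that the interpreter found

from vllm import LLM, SamplingParams  # core offline-inference entry points
```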
> @xwu99 I saw `worker_use_ray=False`. Does that mean your implementation cannot support model parallelism, i.e. `world_size > 1`?

vLLM for CPU does not support tensor parallelism yet. This PR...
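To illustrate the constraint being discussed, here is a minimal offline-inference sketch for the CPU path: `tensor_parallel_size` stays at 1 because the CPU backend has no tensor parallelism yet. The model name is only an example; the `worker_use_ray` flag from the thread refers to an engine argument in vLLM versions of this era and is left at its default here.

```python
# A sketch of single-worker (world_size == 1) inference, which is all the
# CPU backend supports here; tensor parallelism is not available yet.
from vllm import LLM, SamplingParams

llm = LLM(
    model="facebook/opt-125m",  # example model; substitute your own
    tensor_parallel_size=1,     # must stay 1 on the CPU backend
)

params = SamplingParams(temperature=0.8, max_tokens=32)
for output in llm.generate(["Hello, my name is"], params):
    print(output.outputs[0].text)
```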
Xiaochang Wu closed the issue.