
Is this project still actively being maintained?

Open nkwangleiGIT opened this issue 1 year ago • 8 comments

There has been no release for 3 months and only a few commits recently, so will this project still be actively maintained?

I tried serving some LLMs using ray-llm, and I needed to update transformers, install tiktoken, update vllm, etc. to make it work.

Hopefully, some time can be taken to maintain this project, so we can use Ray as a unified framework for data processing, serving, tuning, and training.

Thanks and looking forward to your response.

nkwangleiGIT · Apr 17 '24

I previously raised a question in the Slack community channel regarding ongoing support for this project. About a month ago, there was a discussion promising continued development and updates. However, I have not seen any recent changes or updates since then.

Specifically, I am eager to see support for the newer vllm/transformers packages, which are crucial for my current use cases. Could we get an update on the progress towards integrating these packages? Any timeline or roadmap would be greatly appreciated, as it would help us plan our projects accordingly.

XBeg9 · Apr 21 '24

I was using FastChat previously, and now I plan to use vLLM and Ray Serve for LLM inference; it seems to be working well too. So ray-llm is no longer a dependency for me :-)

nkwangleiGIT · Apr 28 '24

> I was using FastChat previously, and now I plan to use vLLM and Ray Serve for LLM inference; it seems to be working well too. So ray-llm is no longer a dependency for me :-)

I am also interested in finding a FastChat replacement, but I wonder how to implement a model registry, dynamic auto-scaling, and a unique entry URL with Ray? ;)

leiwen83 · Apr 28 '24

I think the Ray Serve ingress can handle the model registry, Ray autoscaling can handle the scaling, and multi-application deployment may provide the unique entry URL. I will write a document about how to do this once they're tested. For now, I have only tested Ray Serve with vLLM serving, and I can scale manually using a serveConfig like the one below:

  serveConfigV2: |
    applications:
      - name: llm-serving-app
        # import_path is "module:attribute", resolved inside the working_dir zip
        import_path: llm-serving:deployment
        route_prefix: /
        runtime_env:
          working_dir: FILE:///vllm-workspace/llm-app.zip
        deployments:
          - name: VLLMPredictDeployment
            # manual scaling: edit this value and re-apply the config
            num_replicas: 2

nkwangleiGIT · Apr 28 '24
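For context, here is a minimal sketch of what the module behind that `import_path` might contain: a Ray Serve deployment wrapping a vLLM engine. Everything here is an assumption (class name, model path, request schema), not the actual contents of `llm-app.zip`:

```python
# Hypothetical module for import_path "llm-serving:deployment"
# (e.g. a file named llm-serving.py inside the working_dir zip).
from ray import serve
from starlette.requests import Request
from vllm import LLM, SamplingParams


# Instead of a fixed num_replicas, autoscaling_config={"min_replicas": 1,
# "max_replicas": 4} could be set here (or in the serveConfig) for autoscaling.
@serve.deployment(ray_actor_options={"num_gpus": 1})
class VLLMPredictDeployment:
    def __init__(self, model_path: str = "/vllm-workspace/models/llama-2-7b-hf"):
        # Each replica loads its own vLLM engine; the model path is made up.
        self.llm = LLM(model=model_path)

    async def __call__(self, request: Request) -> dict:
        body = await request.json()
        params = SamplingParams(
            temperature=body.get("temperature", 0.7),
            max_tokens=body.get("max_tokens", 256),
        )
        outputs = self.llm.generate([body["prompt"]], params)
        return {"text": outputs[0].outputs[0].text}


# The serveConfigV2 above routes "/" to this bound application and sets
# num_replicas there, overriding whatever is set in the decorator.
deployment = VLLMPredictDeployment.bind()
```

Scaling then happens exactly as described in the comment above: edit `num_replicas` in the serveConfig and re-apply it.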

@leiwen83 here is the doc on how to run Ray Serve with autoscaling: http://kubeagi.k8s.com.cn/docs/Configuration/DistributedInference/deploy-using-rary-serve/

For the model registry and the unique entry URL/ingress, I need to take a further look; we may need to customize something on top of FastAPI?

nkwangleiGIT · May 01 '24
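If the goal is a single entry URL plus a basic model registry, one possible shape for that FastAPI customization is a thin ingress deployment that holds a mapping from model name to deployment handle and forwards requests. This is only a sketch under assumptions: the route layout, the deployment names, and the `generate(payload)` method on each model deployment are all made up.

```python
# Hypothetical router: one entry URL, minimal model registry.
from fastapi import FastAPI, HTTPException
from ray import serve
from ray.serve.handle import DeploymentHandle

app = FastAPI()


@serve.deployment
@serve.ingress(app)
class ModelRouter:
    def __init__(self, **model_handles: DeploymentHandle):
        # Registry: model name -> handle of the deployment serving that model.
        self.models = model_handles

    @app.get("/v1/models")
    async def list_models(self) -> dict:
        return {"models": list(self.models.keys())}

    @app.post("/v1/generate/{model_name}")
    async def generate(self, model_name: str, payload: dict) -> dict:
        handle = self.models.get(model_name)
        if handle is None:
            raise HTTPException(status_code=404, detail=f"unknown model: {model_name}")
        # Assumes each model deployment exposes a generate(payload: dict) method;
        # the __call__-based sketch above would need such a method added.
        return await handle.generate.remote(payload)


# Bound together with the model deployments it should know about, e.g.:
# router = ModelRouter.bind(llama2=VLLMPredictDeployment.bind(), ...)
```

Since every model sits behind the router's single route_prefix, clients only ever see one URL, which is roughly the "unique entry URL" asked about above.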

A FastAPI change may not be enough... FastChat implements a controller that tracks the status of all workers, which is what makes the registry possible.

leiwen83 · May 01 '24
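At least the worker-tracking part of that controller has a rough counterpart in Ray Serve's status API. A small sketch, with the caveat that the exact field layout of `ray.serve.status()` may differ between Ray versions:

```python
# Hypothetical status poll, loosely analogous to FastChat's controller
# asking its workers for their state.
from ray import serve


def list_running_applications() -> dict:
    status = serve.status()
    # status.applications maps application name -> an application status overview
    # (field names as of recent Ray 2.x releases; may vary by version).
    return {name: str(app.status) for name, app in status.applications.items()}


if __name__ == "__main__":
    # Assumes Serve applications are already deployed on the connected cluster.
    print(list_running_applications())
```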

@xwu99 is working heavily on updates; let's 🤞 and watch the progress in #149

XBeg9 · May 01 '24

I upgraded vLLM to 0.4.1 a while ago in my fork; check out the details if you are interested ^_^: https://github.com/OpenCSGs/llm-inference/tree/main/llmserve/backend/llm/engines/vllm

depenglee1707 · May 17 '24