
[Feature]: Expose option to load new model weights from disk

Open · edbeeching opened this issue 10 months ago

🚀 The feature, motivation and pitch

In an async RL setting, we often want to perform fast generation with a vLLM endpoint on a separate node and occasionally sync model weights from disk. It would be good if this option were available on the vLLM endpoint.

Alternatives

SGLang already exposes this option: https://docs.sglang.ai/backend/native_api.html#Update-Weights-From-Disk
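For reference, a rough sketch of how the SGLang endpoint is called (based on the linked docs; host, port, and model path are placeholders):

    import requests

    # Ask a running SGLang server to reload its model weights from disk.
    response = requests.post(
        "http://localhost:30000/update_weights_from_disk",
        json={"model_path": "/path/to/updated_checkpoint"},
    )
    print(response.json())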

Additional context

No response

Before submitting a new issue...

  • [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.

edbeeching avatar Feb 05 '25 09:02 edbeeching

Hi @edbeeching, can you see if this feature achieves what you need? https://github.com/vllm-project/vllm/pull/12084

We have been actively working on adding new features to better support RL workflows.

mgoin avatar Feb 05 '25 21:02 mgoin

@mgoin it would also be nice to allow unloading the model (to free GPU memory) and reloading it.

#6566 can only unload LoRA adapters, and #3281 requires reworking the entire HTTP interface.

ghost avatar Mar 03 '25 15:03 ghost

@mgoin thanks for the pointer to https://github.com/vllm-project/vllm/pull/12084 !

What Ed is referring to is whether this collective can be exposed in the OpenAI-compatible server as a dedicated endpoint. For context, we'd like to spin up a vllm server on N nodes and run training on M nodes. At each training step, we'd like to synchronise the weights so that the vllm server is generating from the current policy.

We did look at https://github.com/vllm-project/vllm/pull/12084, but it seems to require us to adopt Ray, which would add a fair amount of complexity to trl.

lewtun avatar Mar 18 '25 10:03 lewtun

To add some context: in vLLM 0.7.x, without tensor parallelism, we could update the weights with the hacky code below, but it no longer works with 0.8.x, where we get the error AttributeError: 'LLMEngine' object has no attribute 'model_executor'


    def _sync_vllm_weights(self, llm: LLM, state_dict: dict) -> None:
        # Only works with vLLM 0.7.*: reach through the engine internals to
        # the underlying torch model on the driver worker...
        model = llm.llm_engine.model_executor.driver_worker.model_runner.model
        # ...and load the new weights in place.
        model.load_weights(state_dict.items())
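For context, a hypothetical call site (trainer is a placeholder for whatever object holds the up-to-date policy):

    # After each optimiser step, push the fresh weights into the vLLM engine.
    state_dict = trainer.model.state_dict()
    self._sync_vllm_weights(llm, state_dict)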

michaelnny avatar Apr 05 '25 10:04 michaelnny

@youkaichao could you look at this? I remember there might be a new way to access the model

mgoin avatar Apr 05 '25 16:04 mgoin

AttributeError: 'LLMEngine' object has no attribute 'model_executor'

llm.llm_engine.model_executor only works in V1 when you set the VLLM_ENABLE_V1_MULTIPROCESSING=0 environment variable.
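For example (sketch only; the env var has to be set before the engine is created, the model name is a placeholder, and the attribute chain is internal so it may change between versions):

    import os

    # Must be set before vLLM creates the engine, otherwise the engine core
    # runs in a separate process and model_executor is not reachable.
    os.environ["VLLM_ENABLE_V1_MULTIPROCESSING"] = "0"

    from vllm import LLM

    llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # placeholder model
    model_executor = llm.llm_engine.model_executor  # in-process executor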

For a more future-proof and stable interface, I think you should use LLM.collective_rpc, which is available in V1 after https://github.com/vllm-project/vllm/pull/15444 (0.8.3, I think).
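A rough sketch with a callable (untested; the model name and checkpoint path are placeholders, and worker internals such as model_runner.model are not a stable public API):

    from vllm import LLM

    def load_new_weights(worker, checkpoint_path):
        # Runs inside every worker process, so it also covers tensor-parallel
        # ranks. Assumes the checkpoint is a plain state_dict saved with
        # torch.save, and that the worker exposes its torch model at
        # worker.model_runner.model (internal, may differ across versions).
        import torch
        state_dict = torch.load(checkpoint_path, map_location="cpu")
        worker.model_runner.model.load_weights(state_dict.items())

    llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")

    # Broadcast the call to all workers; each worker reads the weights from
    # disk itself, so no tensors are shipped through the RPC.
    llm.collective_rpc(load_new_weights, args=("/path/to/checkpoint.pt",))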

youkaichao avatar Apr 13 '25 07:04 youkaichao

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

github-actions[bot] avatar Jul 13 '25 02:07 github-actions[bot]

This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!

github-actions[bot] avatar Aug 12 '25 02:08 github-actions[bot]