vllm
[V1] Support `LLM.apply_model`
This enables a bunch of tests to be run in V1
@youkaichao can you review this?
👋 Hi! Thank you for contributing to the vLLM project.
💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.
Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.
🚀
And also @mgoin since this touches the quantization tests
@njhill I'm unable to get tests/models/multimodal/generation/test_qwen2_vl.py::test_qwen2_vl_multiple_image_embeddings_input[10-128-half-size_factors1-Qwen/Qwen2-VL-2B-Instruct] to pass: the output of apply_model contains only the tensor's dtype and shape rather than the tensor data. I think this is related to the msgspec encoding/decoding logic. Can you help take a look?
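For reference, a minimal repro sketch of the symptom (the model matches the failing test; `get_first_param` is a hypothetical probe, and the observed behavior is as described above, not verified here):

```python
from vllm import LLM

llm = LLM(model="Qwen/Qwen2-VL-2B-Instruct")

def get_first_param(model):
    # Returning a torch.Tensor is where the round-trip reportedly breaks:
    # V1 appears to hand back the tensor's dtype/shape instead of its data.
    return next(model.parameters()).detach().cpu()

# apply_model returns one result per worker.
result = llm.apply_model(get_first_param)[0]
print(type(result), result)
```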
A section in the docs or example script would be useful to demonstrate the interface
Currently this is only used for testing purposes. It is basically a thin wrapper over LLM.collective_rpc, which is already documented in the RLHF examples.
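For illustration, a minimal usage sketch (the model choice is arbitrary; like `collective_rpc`, `apply_model` returns one result per worker):

```python
from vllm import LLM

llm = LLM(model="facebook/opt-125m")  # any small model works here

def check_model(model):
    # `model` is the torch.nn.Module held by each worker; return anything
    # serializable, e.g. the model's class name.
    return type(model).__name__

# A single-element list with tensor_parallel_size=1.
print(llm.apply_model(check_model))
```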
This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @DarkLight1337.
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
@DarkLight1337 seems like the failure at https://buildkite.com/vllm/fastcheck/builds/27184/steps/canvas?jid=019763e2-2f1b-4a34-98ad-6ccb6fa9461f is unrelated (probably flaky), but we should wait until the full CI runs to be sure.
The failing test is related to the issue I pinged @njhill about
+1 @DarkLight1337 @mgoin quite a few tests depend on this
Waiting for @njhill to help debug the failing test
@DarkLight1337 I'm looking at this today
@DarkLight1337 https://github.com/vllm-project/vllm/pull/21845 fixes the serialization issue.
Why has this not been merged? vLLM currently has no easy way to access the underlying model, which is a rather basic feature.
There is an issue with the msgspec serialization that needs to be fixed by @njhill before this PR can be merged.
@DarkLight1337 OK I finally got to this! https://github.com/vllm-project/vllm/pull/25294
@DarkLight1337 are you familiar with errors like tests/kernels/moe/test_ocp_mx_moe.py::test_mxfp4_loading_and_execution_moe[model_case0] - Exception: Call to collective_rpc method failed: Can't get local object 'test_mxfp4_loading_and_execution_moe.<locals>.check_model'? How should we fix this?
e.g. in https://github.com/vllm-project/vllm/blob/d83f3f7cb37a0f1861f16c84d529abcd54889885/tests/kernels/moe/test_mxfp4_moe.py#L63-L79
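For context, the `<locals>` in the message is the classic stdlib pickle limitation: a function defined inside another function cannot be pickled by reference, so (assuming the callable is shipped to the workers with plain pickle) a `check_model` defined inside the test body cannot be sent over `collective_rpc`:

```python
import pickle

def outer():
    def check_model(model):
        return None
    return check_model

# Raises: AttributeError: Can't pickle local object 'outer.<locals>.check_model'
pickle.dumps(outer())
```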
Try moving the imports inside the inner function
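For reference, a hypothetical sketch of what I mean against the linked test (the fixture, import path, and layer access are illustrative assumptions):

```python
def test_mxfp4_loading_and_execution_moe(vllm_runner, model_case):
    with vllm_runner(model_case.model_id) as llm:

        def check_model(model):
            # Import here rather than at module scope, so the serialized
            # callable does not need to resolve module-level symbols on
            # the worker side.
            from vllm.model_executor.layers.quantization.quark.quark import (
                QuarkLinearMethod)

            qkv_proj = model.model.layers[0].self_attn.qkv_proj
            assert isinstance(qkv_proj.quant_method, QuarkLinearMethod)

        llm.apply_model(check_model)
```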
Do you mean the QuarkOCP_MX_MoEMethod, QuarkLinearMethod, and QuarkOCP_MX imports? That does not seem to work; I'll disable the test for now.
Can you show the full stack trace of the error?