vllm
[Bugfix][Ray] Set the CUDA context eagerly in the Ray worker
This PR sets the CUDA context eagerly in the Ray actor (torch.cuda.set_device is actually lazy, I think: it records the device index but does not create a CUDA context until the first real CUDA call).
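To illustrate the laziness, here is a minimal sketch (not code from this PR; it assumes a Linux machine with `libcuda.so.1` and at least one GPU): after `torch.cuda.set_device`, the driver typically still reports no current context on the calling thread, and the context only appears after the first real CUDA operation.

```python
import ctypes
import torch

# Query the CUDA driver directly for the current context on this thread.
libcuda = ctypes.CDLL("libcuda.so.1")
libcuda.cuInit(0)
ctx = ctypes.c_void_p()

torch.cuda.set_device(0)
libcuda.cuCtxGetCurrent(ctypes.byref(ctx))
print(ctx.value)  # typically None: set_device alone did not create a context

torch.empty(1, device="cuda:0")  # first real CUDA op creates and binds the context
libcuda.cuCtxGetCurrent(ctypes.byref(ctx))
print(ctx.value)  # now a non-NULL context handle
```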
There was a bug in Ray + P/D where the NIXL/UCX CUDA context check (done via direct CUDA driver calls) fails: Ray runs model execution in a background thread, and that thread does not inherit the CUDA context.
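A hedged sketch of the fix's shape (illustrative names only, not vLLM's actual worker code): bind the device and force context creation at the top of the thread that will execute the model, so that later driver-level checks by NIXL/UCX find a current context instead of NULL.

```python
import threading

import ray
import torch


def eagerly_bind_cuda_context(local_rank: int) -> None:
    # Hypothetical helper, not vLLM's exact API. set_device only records the
    # device index; synchronize issues a real CUDA call, which creates the
    # primary context and makes it current on *this* thread.
    torch.cuda.set_device(local_rank)
    torch.cuda.synchronize(local_rank)


@ray.remote(num_gpus=1)
class ModelWorker:
    def run(self) -> None:
        # Ray may execute model code in a background thread; that thread
        # starts with no CUDA context current, even if the main actor
        # thread already created one.
        t = threading.Thread(target=self._execute)
        t.start()
        t.join()

    def _execute(self) -> None:
        eagerly_bind_cuda_context(local_rank=0)
        # ... model execution / NIXL-UCX transfers would happen here ...
```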
👋 Hi! Thank you for contributing to the vLLM project.
💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.
Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.
🚀
Cool, I think I addressed all the comments. Please take another look @youkaichao @ruisearch42
PS: I think the failing tests are unrelated to this change.
PS: the new tests pass.