
[Bugfix] Initialize attention bias on the same device as Query/Key/Value

Open edwardzjl opened this issue 9 months ago • 2 comments

The attention bias in vLLM's xformers backend is currently initialized on the default device, rather than the device of the Q/K/V tensors:

https://github.com/vllm-project/vllm/blob/b53d79983c273b2775456d99c0e0890aea073512/vllm/attention/backends/xformers.py#L676-L677

And here is how xformers decides which device to use:

https://github.com/facebookresearch/xformers/blob/8d91ce05a2f6a5ae059593922a631b9ff325b134/xformers/ops/fmha/attn_bias.py#L742:

class BlockDiagonalMask(AttentionBias):
    ...
    @classmethod
    def from_seqlens(
        cls,
        q_seqlen: Sequence[int],
        kv_seqlen: Optional[Sequence[int]] = None,
        *,
        device: Optional[torch.device] = None,
    ) -> "BlockDiagonalMask":
        ...
        device = _get_default_bias_device(device)

https://github.com/facebookresearch/xformers/blob/8d91ce05a2f6a5ae059593922a631b9ff325b134/xformers/ops/fmha/attn_bias.py#L90

def _get_default_bias_device(device: Optional[torch.device] = None) -> torch.device:
    if device is None:
        if torch.cuda.is_available():
            return torch.device("cuda")
        return torch.device("cpu")
    return device
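
In other words, whenever no device is passed and CUDA is available, the bias defaults to torch.device("cuda"), which typically resolves to cuda:0, regardless of where Q/K/V live. Below is a minimal reproduction sketch of the mismatch, assuming xformers is installed and the machine exposes at least 8 GPUs; the shapes, sequence lengths, and dtype are illustrative:

import torch
import xformers.ops as xops
from xformers.ops.fmha.attn_bias import BlockDiagonalMask

device = torch.device("cuda:7")  # the GPU vLLM happens to be assigned to
seq_lens = [2, 3]                # two concatenated prefill sequences
total_tokens, heads, head_dim = sum(seq_lens), 4, 64

q = torch.randn(1, total_tokens, heads, head_dim, device=device, dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# No device argument, so _get_default_bias_device() falls back to
# torch.device("cuda"), which resolves to cuda:0 here.
attn_bias = BlockDiagonalMask.from_seqlens(seq_lens)

# Raises: ValueError: Attention bias and Query/Key/Value should be on the same device
out = xops.memory_efficient_attention(q, k, v, attn_bias=attn_bias)

# Passing the Q/K/V device explicitly avoids the mismatch:
attn_bias = BlockDiagonalMask.from_seqlens(seq_lens, device=q.device)
out = xops.memory_efficient_attention(q, k, v, attn_bias=attn_bias)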

This becomes problematic when vLLM is used in conjunction with libraries like trl for GRPO training. In such cases, vLLM might be assigned to run on a specific GPU (e.g., the next available GPU after those used for training, which is the default behaviour of trl).

For example, if I have 8 GPUs and use cuda:0 to cuda:6 for GRPO training, vLLM will then be assigned to cuda:7. However, the current attention bias initialization will place the bias on cuda:0, leading to the following error:

[rank0]: ValueError: Attention bias and Query/Key/Value should be on the same device
[rank0]:   query.device: cuda:7
[rank0]:   attn_bias   : cuda:0

This PR fixes the issue by initializing the attention bias on the same device as the Query/Key/Value tensors.
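
For reference, here is a minimal sketch of the kind of change, using the BlockDiagonalMask API quoted above; seq_lens and query stand in for the corresponding values in vLLM's xformers backend and are not the exact source:

# Before (illustrative): no device is passed, so the bias falls back to
# _get_default_bias_device(), typically cuda:0.
attn_bias = BlockDiagonalMask.from_seqlens(seq_lens)

# After (illustrative): initialize the bias on the same device as Q/K/V.
attn_bias = BlockDiagonalMask.from_seqlens(seq_lens, device=query.device)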

edwardzjl avatar Feb 18 '25 08:02 edwardzjl

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

github-actions[bot] avatar Feb 18 '25 08:02 github-actions[bot]

The pre-commit CI passed once, but failed after I signed off and force-pushed. I'm not sure why.

edwardzjl avatar Feb 18 '25 08:02 edwardzjl

This could solve issues like https://github.com/huggingface/open-r1/issues/278 and https://github.com/facebookresearch/xformers/issues/1064#issuecomment-2641293818

edwardzjl avatar Feb 21 '25 05:02 edwardzjl

I'm using vllm==0.7.3 and still hitting this issue; I think the fix is not released yet.

dipta007 avatar Feb 25 '25 19:02 dipta007

Same question here: how can I solve it?

Roxanne527 avatar Mar 04 '25 07:03 Roxanne527

Same question here: how can I solve it?

You need to either install vLLM from the main branch or wait for the next release.

edwardzjl avatar Mar 04 '25 07:03 edwardzjl