
GB200 support

Open koceja opened this issue 3 months ago • 6 comments

I was wondering if there are plans to support GB200s. Right now all of the packages and Docker materials are for Linux x86_64, but GB200s are aarch64. I have tried to set this up myself, but I am running into a lot of trouble. Would it be possible to get a working setup script for this?

koceja avatar Sep 14 '25 02:09 koceja

Does PyTorch even ship aarch64 wheels yet? I doubt it.

vermouth1992 avatar Sep 16 '25 01:09 vermouth1992

https://download.pytorch.org/whl/cu128 has the aarch64 CUDA wheels for torch (I don't think PyPI has the non-CPU version). Both SGLang and vLLM support GB200s, so it should be possible. I've just been running into many issues with my own installation; it would be really helpful to have a script or something to follow.
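A minimal sketch of the point above: on aarch64 you have to point pip at the PyTorch CUDA index rather than PyPI. The helper below is hypothetical (not part of verl); the only hard fact it encodes is the cu128 index URL from this thread.

```python
import platform

# CUDA wheel index from the thread; PyPI's aarch64 torch wheels are CPU-only.
CUDA_INDEX = "https://download.pytorch.org/whl/cu128"


def pip_install_command(machine: str = platform.machine()) -> str:
    """Return a pip command that pulls a CUDA-enabled torch wheel.

    Hypothetical helper for illustration: the same index hosts both
    x86_64 and aarch64 CUDA builds, so the command is identical on a
    GB200 (aarch64) node and a conventional x86_64 node.
    """
    if machine not in ("aarch64", "arm64", "x86_64", "AMD64"):
        raise ValueError(f"no known CUDA torch wheel for: {machine}")
    return f"pip install torch --index-url {CUDA_INDEX}"


if __name__ == "__main__":
    print(pip_install_command("aarch64"))
```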

koceja avatar Sep 19 '25 21:09 koceja

I spent a lot of effort on this issue as well. vLLM's latest release, v0.10.2, supports ARM, but without flash-attention; NVIDIA's latest NGC containers ship flash-attention, but I can't make them compatible with vLLM. I wish someone would release a working wheel.
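The mismatch described above (a CUDA-enabled torch but no importable flash-attention, or vice versa) is easy to probe before launching a training run. A small sanity-check sketch, using only the standard import names of these packages (`torch`, `flash_attn`, `vllm`):

```python
import importlib.util


def probe(modules=("torch", "flash_attn", "vllm")):
    """Report which of the given packages are importable in this env.

    find_spec() returns None when a top-level package is absent, so
    this checks availability without actually importing heavy modules.
    """
    return {m: importlib.util.find_spec(m) is not None for m in modules}


if __name__ == "__main__":
    for name, ok in probe().items():
        print(f"{name}: {'found' if ok else 'MISSING'}")
```

Running this inside an NGC container versus a vLLM-only environment makes it obvious which side of the stack is missing.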

siriluo avatar Sep 21 '25 22:09 siriluo

Does anyone know if GB200 support is on the verl roadmap?

koceja avatar Sep 27 '25 19:09 koceja

I am also interested in the timeline.

schuups avatar Oct 22 '25 10:10 schuups

I am working on GB200 support now.

ISEEKYAN avatar Nov 12 '25 10:11 ISEEKYAN