verl v0.2.1 & v0.3 release checklist
v0.2.1
- [x] add assertion when `log_prob_micro_batch_size` is smaller than world_size, and fix the case when "the evaluation dataset size is not divisible by the world_size" https://github.com/volcengine/verl/issues/12#issuecomment-2475353389
- [ ] add an option to remove the call of torch.compile in https://github.com/volcengine/verl/blob/main/verl/workers/actor/dp_actor.py#L56 in case of gcc/nvcc issues https://github.com/volcengine/verl/issues/245#issuecomment-2677172305
- [ ] include the checkpoint fixes in https://github.com/volcengine/verl/issues/250
- [ ] check if https://github.com/volcengine/verl/issues/283 persists (and if so fix it)
- [ ] multi-node training tutorial with `ray start` https://github.com/volcengine/verl/issues/278
- [x] fix the main_generation example https://github.com/volcengine/verl/issues/349 https://github.com/volcengine/verl/pull/351 https://github.com/volcengine/verl/issues/331
v0.3
feel free to propose features (contributions are welcome!)
- [x] upgrade mcore to v0.6 or v0.11
- [ ] deepseek v3 examples
- [ ] megatron checkpoint support
- [x] megatron qwen2 support https://github.com/volcengine/verl/pull/261
- [x] sequence parallel optimization for latest transformers https://github.com/volcengine/verl/issues/312
- [x] multimodal (qwen vl) support
- [ ] sglang integration
- [ ] tool calling examples
- [ ] non-nvidia gpu support
- [ ] start time optimization
How can I help with the 'tool calling examples' part?
related to: https://github.com/volcengine/verl/issues/344 https://github.com/volcengine/verl/issues/340
Under the hood, `chat` calls `generate`, so the design should work; we just need to provide a working/stable example.
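To illustrate, here is a minimal sketch (the model name and tool schema are placeholders, not verl code) that renders a tool-calling prompt via the tokenizer's chat template and passes the plain text to vLLM's `generate`, which is essentially what `chat` does under the hood:

```python
# Sketch only: build a tool-calling prompt with apply_chat_template(tools=...)
# and run it through vLLM's offline generate().
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model = "Qwen/Qwen2.5-7B-Instruct"  # any model whose chat template supports tools
tokenizer = AutoTokenizer.from_pretrained(model)

# Tool schema in the OpenAI-style format accepted by apply_chat_template(tools=...)
tools = [{
    "type": "function",
    "function": {
        "name": "run_python",
        "description": "Execute a Python snippet and return stdout.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}]

messages = [{"role": "user", "content": "What is 37 * 41? Use the tool."}]
prompt = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, tokenize=False
)

llm = LLM(model=model)
out = llm.generate([prompt], SamplingParams(max_tokens=256, temperature=0.0))
print(out[0].outputs[0].text)  # expected to contain a structured tool-call block
```

If the template and model support tools, the completion should contain a tool-call block that a rollout loop could parse and execute.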
Will megatron context parallelism be supported in the future?
Yes. We will use an mcore version that supports CP by default.
@BearBiscuit05 See #344, where I outlined the main challenge. I think it should be relatively straightforward if veRL can start using `chat`, or if vLLM directly adds support for tool calling in `generate`.
I imagine we can have GRPO-trained reasoners in the future that learn when to use tools as part of their `<think>` tags, e.g. to execute code for a feedback loop or retrieve additional information.
I talked to a vLLM maintainer yesterday. It seems there should be no blocker if we switch from `generate` to `chat`. Would you mind giving it a try: call `chat` using SPMD-style offline inference?
I'm not very familiar with inference, but I think I'm starting to get the hang of it. Does this mean I need to build a new chat function and add extra params that include tool calls to invoke `generate`? Or should I just replace `generate` directly with the `chat` function from vLLM?
You should be able to replace `generate` directly with `chat`. The only problem is that we currently pass tokenized inputs into `generate`, whereas `chat` expects `List[ChatCompletionContentPartTextParam]` or `List[List[ChatCompletionContentPartTextParam]]`. I'm not sure what the best design would be in this case.
Case 1: detokenize the tokenized inputs we use for `generate`.
Case 2: change veRL to not tokenize datasets beforehand (a relatively big change).
```python
from typing_extensions import Literal, Required, TypedDict


class ChatCompletionContentPartTextParam(TypedDict, total=False):
    text: Required[str]
    """The text content."""

    type: Required[Literal["text"]]
    """The type of the content part."""
```
The second choice would incur significant overhead from tokenizing on the fly (typically a 2x slowdown in generation, which is basically unacceptable). I guess we will need to seek a solution along the lines of case 1.
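For reference, a rough sketch of case 1, assuming vLLM's offline `LLM.chat` API (available in recent vLLM releases); the model name and helper function are illustrative, not verl code:

```python
# Illustrative only: recover text from the pre-tokenized prompts and route it
# through vLLM's chat() instead of generate().
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model = "Qwen/Qwen2.5-7B-Instruct"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model)
llm = LLM(model=model)

def chat_from_token_ids(batched_prompt_ids, sampling_params):
    """batched_prompt_ids: List[List[int]], the ids currently fed to generate()."""
    # skip_special_tokens drops chat-template control tokens so the template
    # is not applied a second time when chat() re-renders the conversation
    texts = tokenizer.batch_decode(batched_prompt_ids, skip_special_tokens=True)
    conversations = [[{"role": "user", "content": t}] for t in texts]
    # recent vLLM versions accept a batch of conversations here
    return llm.chat(conversations, sampling_params)

outputs = chat_from_token_ids(
    [tokenizer.encode("What is 2 + 2?")],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```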
Got it. I'll give it a try.
Will megatron context parallelism be supported in the future?
Yes. We will use an mcore version that supports CP by default.
It seems that the context parallelism in the model part has not been implemented yet. Is this function currently available?
Not right now, but if you check this roadmap, once verl upgrades mcore, CP will be supported.
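For context, a hedged illustration of how CP is switched on in Megatron-Core (assuming mcore >= 0.6; this is not verl code, and it requires torch.distributed to already be initialized):

```python
# Illustrative only: mcore treats context parallelism as one more axis of the
# model-parallel grid, alongside tensor and pipeline parallelism.
from megatron.core import parallel_state

# requires torch.distributed.init_process_group(...) to have been called first
parallel_state.initialize_model_parallel(
    tensor_model_parallel_size=2,
    pipeline_model_parallel_size=1,
    context_parallel_size=2,  # shards the sequence dimension across 2 GPUs
)
```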
Is it possible to optimize startup time? I noticed when using veRL, it is significantly slower to launch a job than when using Huggingface TRL https://github.com/volcengine/verl/issues/384
Disabling torch.compile is useful here, as torch.compile can also hang PPO training when `use_remove_padding` is enabled. #387
@maksimstw thanks for the feedback! Would you like to provide a PR with this option?
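For whoever picks this up, a sketch of what such an option might look like (the function and flag names are illustrative, not verl's actual code or config keys):

```python
# Sketch only: gate torch.compile behind a flag so environments with
# gcc/nvcc problems can fall back to eager execution.
import torch


def entropy_from_logits(logits: torch.Tensor) -> torch.Tensor:
    """Entropy of the categorical distribution defined by `logits`."""
    pd = torch.nn.functional.softmax(logits, dim=-1)
    return torch.logsumexp(logits, dim=-1) - torch.sum(pd * logits, dim=-1)


def make_entropy_fn(use_torch_compile: bool = True):
    if use_torch_compile:
        # dynamic=True avoids recompilation as sequence lengths vary
        return torch.compile(entropy_from_logits, dynamic=True)
    return entropy_from_logits  # eager fallback when compile is disabled
```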
When will you release the "sglang integration" part?
v0.2.1
- [x] add assertion when `log_prob_micro_batch_size` is smaller than world_size, and fix the case when "the evaluation dataset size is not divisible by the world_size" (Hangs during vllm rollout, no error message #12 (comment))
- [ ] add an option to remove the call of torch.compile in https://github.com/volcengine/verl/blob/main/verl/workers/actor/dp_actor.py#L56 in case of gcc/nvcc issues (Having issues with vLLM for GRPO #245 (comment))
- [ ] include the checkpoint fixes in Load checkpoint from default_local_dir & Save hdfs checkpoints #250
- [ ] check if Quickstart PPO training error #283 persists (and if so fix it)
- [ ] multi-node training tutorial with `ray start` (Add instructions on how to run verl on multi-node #278, doc: add multinode training and debug tutorial #585)
- [x] fix the main_generation example (main_generation seems broken #349, [fix] Improve the params template for generation #351, Tried to run main_generation.py, but it raised KeyError: ConfigAttributeError: Key 'actor' is not in struct. #331)
v0.3
feel free to propose features (contributions are welcome!)
- [x] upgrade mcore to v0.6 or v0.11
- [ ] deepseek v3 examples
- [ ] megatron checkpoint support
- [x] megatron qwen2 support ([megatron] feat: support qwen2 megatron backend #261)
- [x] sequence parallel optimization for latest transformers (#312)
- [x] multimodal (qwen vl) support
- [ ] sglang integration
- [ ] tool calling examples
- [ ] non-nvidia gpu support (https://github.com/volcengine/verl/pull/360/files)
- [ ] start time optimization
- [x] prime recipe
how to install v0.3
add an option to remove the call of torch.compile
Item solved in #554
Hi @JarvisFei, v0.3 is not fully released yet, but you are welcome to try the verl main branch: install from source with `pip install -e .`
As we are already making quite some progress in the main branch, I suggest we freeze the code this week for v0.3 and push the rest of the features to v0.4.
Moving discussions to https://github.com/volcengine/verl/issues/710