[ci, megatron] test: Add Qwen3 Megatron+Mindspeed Ascend NPU CI
What does this PR do?
Add Qwen3 Megatron+Mindspeed Ascend NPU CI
Checklist Before Starting
- [x] Search for similar PRs. Paste at least one query link here: ...
- [x] Format the PR title as `[{modules}] {type}: {description}` (this will be checked by the CI).
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`.
  - If this PR involves multiple modules, separate them with `,`, like `[megatron, fsdp, doc]`.
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`.
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`
Test
For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results such as training curve plots, evaluation results, etc.
Checklist Before Submitting
[!IMPORTANT] Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.
- [x] Read the Contribute Guide.
- [x] Apply pre-commit checks: `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [ ] Add / Update the documentation.
- [x] Add unit or end-to-end test(s) to the CI workflow to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in the `ci-request` channel in the `verl` Slack workspace. (If not accessible, please try the Feishu group (飞书群).)
Since the existing test scripts all use small models like 0.5B or 0.6B, while the smallest Qwen3-MoE model is 30B, this would significantly increase the runtime when pulling the model. Additionally, network issues could make the CI less stable. May I use a fully dummy model with weights trimmed to approximately 1B instead? @tardis-key
Using a trimmed model is a good idea. But the current config system requires a path, and I’m not sure if a dummy model will work. If it doesn’t, we can make it happen by uploading the trimmed model to Hugging Face. @wlf-darkmatter
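As a quick sanity check that a trimmed Qwen3-MoE config can land near 1B parameters, here is a rough back-of-the-envelope estimator. This is a sketch with hypothetical dimensions: the hidden size, layer count, expert count, and expert FFN width below are illustrative placeholders, not the actual trimmed config, and the formula ignores GQA, norms, and biases.

```python
def moe_param_count(hidden, layers, vocab, n_experts, expert_ff):
    """Rough parameter count for a MoE transformer.

    Simplifications: attention counted as 4 full hidden*hidden
    projections (no GQA), gated (SwiGLU-style) expert MLPs,
    norms and biases ignored, untied input/output embeddings.
    """
    attn = 4 * hidden * hidden                 # q, k, v, o projections
    ffn = n_experts * 3 * hidden * expert_ff   # gate/up/down per expert
    router = hidden * n_experts                # expert routing weights
    per_layer = attn + ffn + router
    embed = vocab * hidden                     # one embedding matrix
    return layers * per_layer + 2 * embed      # separate input/output embeddings

# Hypothetical trimmed dims (NOT the real Qwen3-MoE-30B config):
total = moe_param_count(hidden=2048, layers=12, vocab=151936,
                        n_experts=8, expert_ff=512)
print(f"{total / 1e9:.2f}B parameters")  # prints "1.13B parameters"
```

Numbers in this ballpark keep the CI checkpoint pull fast compared to the full 30B model, which is the motivation for the dummy-model approach above.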
Self-test passed.