
[vllm] fix: ensure AsyncLLM response_length less equal than max_new_tokens in generation_config.json

Open · Yangruipis opened this pull request 7 months ago · 0 comments

Checklist Before Starting

  • [x] Search for similar PR(s).

What does this PR do?

  • max_tokens is deprecated for the vLLM OpenAI-compatible endpoint; use max_completion_tokens instead.
  • vLLM uses max_new_tokens from generation_config.json as the server-side generation limit (see https://github.com/vllm-project/vllm/pull/12242). Even if config.response_length is greater than max_new_tokens, the server still enforces the value from generation_config.json, which can lead to unexpected behaviour (responses always stopping early). This PR therefore validates the two values at initialization and raises an error if config.response_length exceeds max_new_tokens; a sketch of such a check is shown below.
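
A minimal sketch of this initialization check, assuming hypothetical names (check_response_length, model_path, response_length) rather than the exact identifiers used in the PR:

```python
import json
import os


def check_response_length(model_path: str, response_length: int) -> None:
    """Raise at initialization if the configured response_length exceeds the
    max_new_tokens declared in the model's generation_config.json, since vLLM
    caps generation at that value regardless of the requested length."""
    gen_cfg_path = os.path.join(model_path, "generation_config.json")
    if not os.path.exists(gen_cfg_path):
        return  # no generation_config.json, nothing to validate
    with open(gen_cfg_path) as f:
        gen_cfg = json.load(f)
    max_new_tokens = gen_cfg.get("max_new_tokens")
    if max_new_tokens is not None and response_length > max_new_tokens:
        raise ValueError(
            f"config.response_length ({response_length}) is greater than "
            f"max_new_tokens ({max_new_tokens}) in generation_config.json; "
            "responses would always stop early."
        )
```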

High-Level Design

Demonstrate the high-level design if this PR is complex.

Specific Changes

List the specific changes.

API

Demonstrate how the API changes if any.

Usage Example

Provide usage example(s) for easier usage.

# Add code snippet or script demonstrating how to use this 
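
As an illustrative, hypothetical example (not taken from the PR itself), a request against a vLLM OpenAI-compatible server that passes max_completion_tokens instead of the deprecated max_tokens might look like the following; the server URL, API key, and model name are placeholders:

```python
from openai import OpenAI

# Point the client at a locally running vLLM OpenAI-compatible server (placeholder URL).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="my-model",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
    # max_tokens is deprecated for the chat completions endpoint;
    # max_completion_tokens is the replacement.
    max_completion_tokens=512,
)
print(response.choices[0].message.content)
```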

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

Additional Info.

  • Issue Number: Fixes issue # or discussion # if any.
  • Training: [Note which backend this PR will affect: FSDP, Megatron, both, or none]
  • Inference: [Note which backend this PR will affect: vLLM, SGLang, both, or none]

Checklist Before Submitting

  • [ ] Read the Contribute Guide.
  • [ ] Apply pre-commit checks.
  • [ ] Add [BREAKING] to the PR title if it breaks any API.
  • [ ] Update the documentation about your changes in the docs.
  • [ ] Add CI test(s) if necessary.

Yangruipis · May 26 '25 07:05