
[Misc] Update reasoning with stream example to use OpenAI library

liuyanyi opened this pull request 9 months ago • 3 comments

Streaming with a reasoning model can use the OpenAI client, with an extra check on the delta to tell reasoning content apart from regular content, like:

    # Check whether this delta carries reasoning_content or regular content
    if hasattr(chunk.choices[0].delta, "reasoning_content"):
        reasoning_content = chunk.choices[0].delta.reasoning_content
    elif hasattr(chunk.choices[0].delta, "content"):
        content = chunk.choices[0].delta.content

Reference: https://help.aliyun.com/zh/model-studio/developer-reference/deepseek#62c72012bc2sw
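
For context, a fuller sketch of the streaming loop could look like the following. The base URL, API key, and model name are placeholders for your own vLLM deployment; `reasoning_content` is a vLLM extension to the OpenAI schema, so the sketch reads it defensively:

    from openai import OpenAI

    # Placeholders: point these at your own vLLM deployment.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    stream = client.chat.completions.create(
        model="deepseek-r1",  # placeholder model name
        messages=[{"role": "user", "content": "Hi"}],
        stream=True,
    )

    for chunk in stream:
        if not chunk.choices:
            continue
        delta = chunk.choices[0].delta
        # reasoning_content is not part of the stock OpenAI schema,
        # so access it with getattr instead of a plain attribute read.
        reasoning = getattr(delta, "reasoning_content", None)
        if reasoning is not None:
            print(reasoning, end="", flush=True)
        elif delta.content is not None:
            print(delta.content, end="", flush=True)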

liuyanyi avatar Mar 01 '25 10:03 liuyanyi

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors. You can run the other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

github-actions[bot] avatar Mar 01 '25 10:03 github-actions[bot]

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @liuyanyi.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
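
For reference, one way to do the rebase locally, assuming origin points at the fork and upstream at vllm-project/vllm (remote and branch names below are placeholders; adjust to your setup):

    # Assumed remote layout: origin = your fork, upstream = vllm-project/vllm
    git fetch upstream
    git rebase upstream/main
    # resolve any conflicts, then push the rebased branch to the fork
    git push --force-with-lease origin <your-branch-name>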

mergify[bot] avatar Mar 03 '25 05:03 mergify[bot]

Just a warning for clarity:

In this example, both reasoning_content and content may evaluate to False, since a non-special token can be represented as a zero-length string (e.g. when it forms part of an emoji). This happens frequently with deepseek-r1, which emits emojis in the majority of its responses (even for a humble "Hi" query). I think it is better to explicitly check whether reasoning_content or content is None; otherwise parts of the token stream do not get printed.
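
i.e. something along these lines (a sketch of the None-based check, reusing the stream from the example above):

    reasoning_text, answer_text = "", ""
    for chunk in stream:
        if not chunk.choices:
            continue
        delta = chunk.choices[0].delta
        # Compare against None rather than relying on truthiness: an
        # empty-string delta is valid output, not a missing field.
        if getattr(delta, "reasoning_content", None) is not None:
            reasoning_text += delta.reasoning_content
        elif delta.content is not None:
            answer_text += delta.content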

cjackal avatar Mar 03 '25 14:03 cjackal

@cjackal Could you please give it another review?

gaocegege avatar Mar 06 '25 02:03 gaocegege

@gaocegege LGTM, thank you!

cjackal avatar Mar 06 '25 09:03 cjackal

@DarkLight1337 Could you please help review this?

gaocegege avatar Mar 06 '25 12:03 gaocegege