chore: Extended BaseMessage to support reasoning fields and updated usage accounting in streaming mode
Description
- Reasoning Support: Some providers now emit `reasoning_content` alongside assistant text (including during streaming). We plumb that trace directly into `BaseMessage` so both batch and streaming callers can reliably access the thinking process without scraping metadata.
- Streaming Usage Handling: Different APIs interleave usage chunks with choice deltas in inconsistent ways. We updated the stream processors to apply usage updates regardless of whether choices are present, ensuring final token accounting and completion emission work across providers such as OpenAI, Gemini, and Qwen.
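The two changes above can be sketched together. This is an illustrative simplification, not the actual CAMEL API: `apply_chunk` and the dict-based chunk shape are hypothetical stand-ins for the real stream processors, shown only to demonstrate applying usage updates even when a chunk carries no choices.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BaseMessage:
    role_name: str
    content: str
    # New optional field carrying the provider's reasoning trace, so
    # callers no longer need to scrape it out of metadata.
    reasoning_content: Optional[str] = None

def apply_chunk(chunk: dict, state: dict) -> None:
    """Fold one streamed chunk into accumulated state.

    Usage is applied unconditionally: some providers send a final
    usage-only chunk whose `choices` list is empty.
    """
    usage = chunk.get("usage")
    if usage is not None:
        state["usage"] = usage
    for choice in chunk.get("choices", []):
        delta = choice.get("delta", {})
        state["content"] += delta.get("content") or ""
        state["reasoning"] += delta.get("reasoning_content") or ""

state = {"content": "", "reasoning": "", "usage": None}
# A delta chunk carrying reasoning text but no usage.
apply_chunk({"choices": [{"delta": {"reasoning_content": "thinking..."}}]}, state)
# A trailing usage-only chunk with an empty choices list.
apply_chunk({"choices": [], "usage": {"total_tokens": 42}}, state)

msg = BaseMessage(
    role_name="assistant",
    content=state["content"],
    reasoning_content=state["reasoning"],
)
```

With this shape, the reasoning trace survives into the final message and token accounting completes even when the usage chunk arrives without choices.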
Checklist
Go over all the following points, and put an x in all the boxes that apply.
- [ ] I have read the CONTRIBUTION guide (required)
- [ ] I have linked this PR to an issue using the Development section on the right sidebar or by adding `Fixes #issue-number` in the PR description (required)
- [ ] I have checked if any dependencies need to be added or updated in `pyproject.toml` and `uv lock`
- [ ] I have updated the tests accordingly (required for a bug fix or a new feature)
- [ ] I have updated the documentation if needed:
- [ ] I have added examples if this is a new feature
If you are unsure about any of these, don't hesitate to ask. We are here to help!
updated cc @Wendong-Fan
@Wendong-Fan hi, based on our previous discussion, we should handle thinking content at the model level. However, after further consideration, I realized that if we want to keep the original response structure (for example, when the API returns results that separate thinking and content), then we should retain that structure at the model level. Once we retain this structure at the model level, we inevitably need a set of processing logic in the ChatAgent. Therefore, I will continue in the existing manner.
LGTM @fengju0213! Could you provide me with a model that supports the `reasoning_content` field for testing purposes?
@Saedbhati thanks for the review! You can use deepseek-reasoner to test it; if you need an API key, you can send me a message on Slack.
Hey @Saedbhati, have you run the example files? Shall I go ahead and merge?
hi @waleedalzarooni the key I gave you should be sufficient to run this example, right? Feel free to message me if you have any questions.
@waleedalzarooni Go ahead and merge, LGTM!