Update MistralAI, Moonshot AI, and Qwen parameters.
Checklist:
> [!IMPORTANT]
> Please review the checklist below before submitting your pull request.
- [ ] Please open an issue before creating a PR or link to an existing issue
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I ran `dev/reformat` (backend) and `cd web && npx lint-staged` (frontend) to appease the lint gods
Description
Describe the big picture of your changes here to communicate to the maintainers why we should accept this pull request. If it fixes a bug or resolves a feature request, be sure to link to that issue. Close issue syntax: Fixes #<issue number>, see documentation for more details.
Fixes #8141
Type of Change
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update, included: Dify Document
- [ ] Improvement, including but not limited to code refactoring, performance optimization, and UI/UX improvement
- [ ] Dependency upgrade
Testing Instructions
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.
- [ ] Test A
- [ ] Test B
I don't think it supports 128k output tokens. Output tokens (`max_tokens`) are different from context tokens.
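To make the distinction concrete, here is a minimal sketch with hypothetical numbers (the variable names and values are illustrative only, not taken from the Mistral API): `max_tokens` caps only the completion, while the context window must hold the prompt plus the completion.

```python
# Hypothetical numbers for illustration only.
context_size = 128_000   # total window: prompt + completion
max_tokens = 16_384      # cap on the completion alone

prompt_tokens = 100_000  # assumed size of the incoming prompt

# A request only fits if the prompt plus the requested
# completion budget stays within the context window.
fits = prompt_tokens + max_tokens <= context_size
print(fits)  # True: 100,000 + 16,384 <= 128,000
```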
https://docs.mistral.ai/getting-started/models/
https://mistral.ai/news/mistral-large-2407/
I referred to the official materials linked above.
> I don't think it supports 128k output tokens. Output tokens (`max_tokens`) are different from context tokens.
Yes, but those numbers are the context size from the docs; they do not mention the output tokens.
https://docs.mistral.ai/getting-started/models/ The Max Tokens listed on this page should be the context size.
I couldn't find a documented max output tokens value.
I think it could be the following (sketched below):
- 32K context size: set max tokens to 4096
- 64K context size: set max tokens to 8192
- 128K context size: set max tokens to 16384
- 256K context size: set max tokens to 32768
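A minimal sketch of that proposed mapping (the helper and constants are my own illustration, not Dify code; it assumes the binary interpretation of 32K = 32768, in which case each tier is one eighth of the context size):

```python
# Proposed defaults from the comment above: max_tokens = context_size / 8.
# This is a heuristic, not an officially documented limit.
CONTEXT_TO_MAX_TOKENS = {
    32_768: 4_096,    # 32K context
    65_536: 8_192,    # 64K context
    131_072: 16_384,  # 128K context
    262_144: 32_768,  # 256K context
}

def default_max_tokens(context_size: int) -> int:
    """Return the proposed max_tokens default for a given context size."""
    return CONTEXT_TO_MAX_TOKENS.get(context_size, context_size // 8)

print(default_max_tokens(131_072))  # 16384
```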
> I don't think it supports 128k output tokens. Output tokens (`max_tokens`) are different from context tokens.
The modifications are complete. Please review.