
Update MistralAI, Moonshot AI, and Qwen parameters.

AAEE86 opened this pull request 1 year ago • 4 comments

Checklist:

[!IMPORTANT]
Please review the checklist below before submitting your pull request.

  • [ ] Please open an issue before creating a PR or link to an existing issue
  • [ ] I have performed a self-review of my own code
  • [ ] I have commented my code, particularly in hard-to-understand areas
  • [ ] I ran `dev/reformat` (backend) and `cd web && npx lint-staged` (frontend) to appease the lint gods

Description

Describe the big picture of your changes here to communicate to the maintainers why we should accept this pull request. If it fixes a bug or resolves a feature request, be sure to link to that issue. Close issue syntax: Fixes #<issue number>, see documentation for more details.

Fixes #8141

Type of Change

  • [ ] Bug fix (non-breaking change which fixes an issue)
  • [ ] New feature (non-breaking change which adds functionality)
  • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • [ ] This change requires a documentation update, included: Dify Document
  • [ ] Improvement, including but not limited to code refactoring, performance optimization, and UI/UX improvement
  • [ ] Dependency upgrade

Testing Instructions

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.

  • [ ] Test A
  • [ ] Test B

AAEE86 avatar Sep 09 '24 08:09 AAEE86

I don't think it supports 128k output tokens. Output tokens (max tokens) are different from context tokens.

https://docs.mistral.ai/getting-started/models/
https://mistral.ai/news/mistral-large-2407/

(screenshots of the official documentation attached)

I referred to the materials provided in the official documentation.

AAEE86 avatar Sep 09 '24 08:09 AAEE86

I don't think it supports 128k output tokens. Output tokens (max tokens) are different from context tokens.

Yes, but that is the context size from the docs; they do not mention the output tokens.

(screenshot attached)

crazywoola avatar Sep 09 '24 08:09 crazywoola

https://docs.mistral.ai/getting-started/models/ — the "Max Tokens" values on this page should be the context size.

I didn't find a max output tokens value there.

I think it could be set as follows:

  • 32K context size → set max tokens to 4096
  • 64K context size → set max tokens to 8192
  • 128K context size → set max tokens to 16384
  • 256K context size → set max tokens to 32768
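For context, these parameters live in the per-model YAML files under the provider directories in the Dify model runtime. A rough sketch of the relevant fields, assuming the usual schema (the file path and the concrete numbers here are illustrative, not the exact diff in this PR):

```yaml
# Illustrative only — not the exact file changed in this PR.
# e.g. api/core/model_runtime/model_providers/mistralai/llm/<model>.yaml
model: mistral-large-latest
model_type: llm
model_properties:
  mode: chat
  context_size: 131072        # 128K context window (input + output)
parameter_rules:
  - name: max_tokens
    use_template: max_tokens
    default: 1024
    min: 1
    max: 16384                # cap on output tokens, distinct from context_size
```

The point of the mapping above is that `max` under the `max_tokens` rule should reflect the output limit, while `context_size` stays at the documented context window.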

AAEE86 avatar Sep 09 '24 09:09 AAEE86

I don't think it supports 128k output tokens. Output tokens (max tokens) are different from context tokens.

The modification has been completed. Please review.

AAEE86 avatar Sep 11 '24 02:09 AAEE86