Fix: max_tokens typo in Mistral Chat
`ALLOWED_PARAMS` includes `max_tokens`, but `generation_kwargs` was using `max_gen_len`.
Closes issue: https://github.com/deepset-ai/haystack-core-integrations/issues/741
Note: AWS-based tests are expected to fail when run from forks (to be fixed).
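For context, here is a minimal sketch of the kind of parameter filtering at play. The function name, default value, and structure are illustrative assumptions based on this PR, not the verbatim adapter code:

```python
# Sketch of the bug: the adapter keeps only keys listed in ALLOWED_PARAMS,
# so a default keyed as "max_gen_len" was silently dropped from the request
# body instead of limiting the output length.
ALLOWED_PARAMS = ["max_tokens", "stop", "temperature", "top_p", "top_k"]

def prepare_body(prompt: str, generation_kwargs: dict) -> dict:
    # Before the fix the default used the key "max_gen_len", which is not in
    # ALLOWED_PARAMS and was therefore filtered out. Using "max_tokens"
    # (assumed default of 512 for illustration) keeps it in the request.
    params = {"max_tokens": 512, **generation_kwargs}
    filtered = {k: v for k, v in params.items() if k in ALLOWED_PARAMS}
    return {"prompt": prompt, **filtered}
```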
Hello Vishal @vish9812, and thank you for opening this pull request. Let me try it out locally, and then I will take care of merging this and releasing a new version of the amazon-bedrock-haystack PyPI package.
@julian-risch I only tested `max_tokens` and `temperature` and was able to run inference successfully. But you're right, `ALLOWED_PARAMS` doesn't match the parameters suggested in the AWS docs:
```json
{
  "prompt": string,
  "max_tokens": int,
  "stop": [string],
  "temperature": float,
  "top_p": float,
  "top_k": int
}
```
Also, the docs could be updated to mention that Mistral models are also supported.
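For reference, a hedged usage sketch of running a Mistral model on Bedrock with `max_tokens` in `generation_kwargs` (the import path and constructor arguments are my assumptions about this package's API at the time; AWS credentials with Bedrock access are required):

```python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.amazon_bedrock import AmazonBedrockChatGenerator

# Assumed model ID for Mistral 7B Instruct on Bedrock; max_tokens now passes
# through ALLOWED_PARAMS instead of being dropped.
generator = AmazonBedrockChatGenerator(
    model="mistral.mistral-7b-instruct-v0:2",
    generation_kwargs={"max_tokens": 256, "temperature": 0.7, "top_p": 0.9},
)
result = generator.run([ChatMessage.from_user("Summarize what Amazon Bedrock is.")])
print(result["replies"][0])
```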