Julian Risch
In Haystack 1.26.x we should replace the `nltk.download("punkt")` call with `nltk.download("punkt_tab")` here https://github.com/deepset-ai/haystack/blob/883cd466bd0108ff4f6af4c389f0e42fabc1282c/haystack/nodes/preprocessor/preprocessor.py#L123 so that users can use Haystack 1.26.x with NLTK 3.9. Earlier NLTK versions are affected by https://nvd.nist.gov/vuln/detail/CVE-2024-39705. We...
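A minimal sketch of the proposed one-line change (the surrounding preprocessor code is omitted):

```python
import nltk

# before (earlier NLTK versions load pickled tokenizer data, see the CVE above):
# nltk.download("punkt")

# after: NLTK 3.9 ships the sentence tokenizer data as "punkt_tab"
nltk.download("punkt_tab")
```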
As of July 2024, gpt-4o-mini should be used in place of gpt-3.5-turbo: https://platform.openai.com/docs/models/gpt-3-5-turbo gpt-3.5-turbo is currently the default model of `OpenAIChatGenerator` and `OpenAIGenerator`. We should treat that change of the default behavior...
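Until the default changes, users can already opt into the new model explicitly via the `model` init parameter; a minimal sketch assuming the Haystack 2.x OpenAI generators:

```python
from haystack.components.generators import OpenAIGenerator
from haystack.components.generators.chat import OpenAIChatGenerator

# Pass the model name explicitly instead of relying on the default
generator = OpenAIGenerator(model="gpt-4o-mini")
chat_generator = OpenAIChatGenerator(model="gpt-4o-mini")
```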
Haystack release notes are currently added to the Haystack website manually https://haystack.deepset.ai/release-notes We should automate this step so that, for example, https://haystack.deepset.ai/release-notes/2.3.1 is automatically added to the website when the...
We removed the Multiplexer from Haystack along with its documentation. We should also remove it from the `Others` overview documentation page: https://docs.haystack.deepset.ai/docs/other https://github.com/deepset-ai/haystack/pull/8020
A community member pointed me to outdated code in one of our recipes: https://github.com/deepset-ai/haystack/discussions/8299#discussion-7103259 That particular recipe is fixed now, but we should check and update all the others too. For...
We need to better understand how much more complex and difficult to understand the tutorials would become if we used ChatGenerator instead of the regular Generators. For that purpose, let's create a version...
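To make the comparison concrete, here is a rough sketch of what the swap would look like in a single tutorial cell (assuming the OpenAI generators and the Haystack 2.3.x `run()` signatures):

```python
from haystack.components.generators import OpenAIGenerator
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage

# Regular generator as used in the tutorials today: plain string prompt in, string replies out
generator = OpenAIGenerator(model="gpt-4o-mini")
result = generator.run(prompt="In one sentence, what is Haystack?")
print(result["replies"][0])

# ChatGenerator variant: prompts become ChatMessage lists, replies are ChatMessage objects
chat_generator = OpenAIChatGenerator(model="gpt-4o-mini")
chat_result = chat_generator.run(messages=[ChatMessage.from_user("In one sentence, what is Haystack?")])
print(chat_result["replies"][0].content)
```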
"This tutorial uses Haystack 2.0" or "Install Haystack 2.0 with pip:" is mentioned in some tutorials although we install the most recent release, which is 2.3.1 or soon 2.4.0. We...
As of July 2024, gpt-4o-mini should be used in place of gpt-3.5-turbo: https://platform.openai.com/docs/models/gpt-3-5-turbo I found gpt-3.5-turbo in a couple of tutorials, for example https://github.com/deepset-ai/haystack-tutorials/blob/13a85a851adde687b713fa085ecf9db18318fc79/tutorials/36_Building_Fallbacks_with_Conditional_Routing.ipynb#L208 The text description of the code...
**Is your feature request related to a problem? Please describe.** Google Vertex AI, in particular the Gemini 1.5 Flash and Gemini 1.5 Pro models, supports prompt caching, also called context caching....
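For context, a rough sketch of how the underlying Vertex AI Python SDK exposes context caching, which the integration would need to wrap; the module paths, parameters, and minimum-token requirement are assumptions based on the preview API and should be verified against the current docs:

```python
import datetime

import vertexai
from vertexai.preview import caching
from vertexai.preview.generative_models import GenerativeModel, Part

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project

# The cached contents must exceed a minimum token count, so this is typically a large document
long_document_text = "..."  # placeholder

cached_content = caching.CachedContent.create(
    model_name="gemini-1.5-flash-001",
    system_instruction="You answer questions about the provided document.",
    contents=[Part.from_text(long_document_text)],
    ttl=datetime.timedelta(minutes=60),
)

# Later requests reference the cache instead of resending the full context
model = GenerativeModel.from_cached_content(cached_content=cached_content)
response = model.generate_content("Summarize the key findings.")
print(response.text)
```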
We should enable users to use embedding models such as text-embedding-004 with the integration. https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api Came up on Discord. Related to https://github.com/deepset-ai/haystack-core-integrations/issues/1530
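A rough sketch of the underlying Vertex AI call such an embedder component could wrap (the SDK module and return shape are assumptions to verify):

```python
import vertexai
from vertexai.language_models import TextEmbeddingModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project

model = TextEmbeddingModel.from_pretrained("text-embedding-004")
embeddings = model.get_embeddings(["What is context caching in Vertex AI?"])
print(len(embeddings[0].values))  # dimensionality of the returned embedding vector
```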