hsm207
> one comment, otherwise think it looks good. @hsm207 @hwchase17 do we think this resolves #4742

@dev2049 yes, this resolves #4742
Just encountered #6021, but only for some projects in a solution. I'm on version 2.22.5, and every time I reload the window, the C# output pane shows this: ``` Using dotnet...
@francip what do you think of this change?
@ashemag @manubamba we recently updated that [notebook example.](https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html) You don't actually need to set up weaviate with a vectorizer when using it with langchain, because you can use the `embeddings`...
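For reference, a minimal sketch of what that looks like, assuming the `Weaviate` vectorstore class and an `OpenAIEmbeddings` instance (the URL and the text sample are placeholders, and the exact query method may differ depending on the langchain version):

```python
import weaviate
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Weaviate

embeddings = OpenAIEmbeddings()
texts = ["LangChain makes it easy to work with LLMs", "Weaviate is a vector database"]

# Documents are embedded client-side with the embeddings object, so the
# Weaviate instance itself does not need a vectorizer module configured.
vectorstore = Weaviate.from_texts(
    texts,
    embeddings,
    weaviate_url="http://localhost:8080",  # placeholder URL
)

# Queries can likewise be embedded client-side and searched by vector.
docs = vectorstore.similarity_search_by_vector(embeddings.embed_query("What is Weaviate?"), k=2)
```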
Tried a smaller model, but this time it segfaults when loading, i.e.:
```
from langchain.llms import GPT4All

# Instantiate the model. Callbacks support token-wise streaming
model = GPT4All(model="/workspaces/models/ggml-gpt4all-j-v1.1-breezy.bin")
```
@imeckr yes, that's the way to do it now. But I think a better user experience would be something like:
```python
retriever = WeaviateHybridSearchRetriever(
    client,
    index_name="Document",
    text_key="text",
    vectorizer="cohere",
)
```
...
@jacobhutchinson agree. Could you open a separate feature request, so it is easier to keep track of the areas that need to be improved in the hybrid search?
@jacobhutchinson In order to use any of weaviate's vectorizer modules, the user needs to do 2 things: 1. set up the module e.g. specify the API key (OpenAI), run the...
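To make step 1 concrete, here is a rough sketch using the v3 weaviate Python client with the text2vec-openai module (the URL, API key, and class name are placeholders, not from the original comment):

```python
import weaviate

# 1. Supply the module's credentials when creating the client.
client = weaviate.Client(
    url="http://localhost:8080",  # placeholder URL
    additional_headers={"X-OpenAI-Api-Key": "sk-..."},  # placeholder key
)

# 2. Point the class at the vectorizer module in its schema definition.
client.schema.create_class(
    {
        "class": "Document",
        "vectorizer": "text2vec-openai",
    }
)
```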
if the team agrees with the suggestion, then the code change is to rewrite similarity_search so that it embeds the query using the embedding model the class was initialised with...
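Roughly, the change could look like the sketch below (attribute names such as `_embedding`, `_client`, `_index_name`, `_query_attrs`, and `_text_key` are assumptions about the existing class, not confirmed):

```python
from typing import Any, List

from langchain.docstore.document import Document


def similarity_search(self, query: str, k: int = 4, **kwargs: Any) -> List[Document]:
    # Embed the query with the embedding model the class was initialised with,
    # instead of delegating to Weaviate's near_text / vectorizer module.
    embedded_query = self._embedding.embed_query(query)

    # Run a near_vector search against the index and collect the top-k results.
    query_obj = self._client.query.get(self._index_name, self._query_attrs)
    result = query_obj.with_near_vector({"vector": embedded_query}).with_limit(k).do()

    docs = []
    for res in result["data"]["Get"][self._index_name]:
        text = res.pop(self._text_key)
        docs.append(Document(page_content=text, metadata=res))
    return docs
```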
> can we have a way to toggle between the two? eg keep both functionality and let the user decide?

@hwchase17 sure. PR #4365 supports this requirement. We can close...