Azure connection for embedding models broken
Hey,
I'm using the OpenAI API from Azure. It seems that querying an index with a LangChain OpenAIEmbeddings model is broken. I have one model for embedding documents and another for querying, and I specify them via OpenAIEmbeddings. Direct invocation of OpenAIEmbedding.get_query_embedding and OpenAIEmbedding.get_text_embedding works, so the problem is in GPTListIndex.query. The error message is
InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>
but neither deployment_id nor engine can be passed to query, and they shouldn't need to be. I suppose query should use query_model_name and the openai.Embedding API, but instead it's trying to call openai.Completion.
from langchain.embeddings import OpenAIEmbeddings
from llama_index import GPTListIndex, LangchainEmbedding

index = GPTListIndex(documents)  # init list index
em_model = LangchainEmbedding(
    OpenAIEmbeddings(
        document_model_name="text-search-ada-doc-001",
        query_model_name="text-search-ada-query-001",
    )
)
index.query(
    "<query_text>",
    mode="embedding",
    embed_model=em_model,
)
Hey @TeemuSo, the index.query call actually calls two APIs under the hood: 1) the embeddings API, to embed the query and retrieve relevant documents, and 2) the completion API, to synthesize the final response from the retrieved documents.
In order to configure 2), you can pass a custom llm_predictor when constructing the index.
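A minimal sketch of what that could look like, assuming the gpt_index/langchain APIs from around the time of this issue; the deployment name "my-davinci-deployment" is a hypothetical placeholder for your Azure deployment, and documents is assumed to be loaded elsewhere:

```python
from langchain.llms import AzureOpenAI
from llama_index import GPTListIndex, LLMPredictor

# AzureOpenAI routes completion calls to a named Azure deployment,
# which supplies the deployment_id that the error message is asking for.
llm = AzureOpenAI(
    deployment_name="my-davinci-deployment",  # hypothetical Azure deployment name
    model_name="text-davinci-003",
)
llm_predictor = LLMPredictor(llm=llm)

# Pass the predictor at construction time so the completion step
# (step 2 above) also goes through the Azure deployment.
index = GPTListIndex(documents, llm_predictor=llm_predictor)
```

With both the embed_model and the llm_predictor pointing at Azure deployments, neither step of index.query should fall back to the default (non-Azure) OpenAI client.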
Let me know if this addresses your question/issue.
It was my lack of knowledge; it works now. Thank you :)