azure-docs
AzureOpenAIEmbeddingSkill not working: Web Api response status 'BadRequest', Web Api response details:

```json
{
  "error": {
    "message": "Too many inputs. The max number of inputs is 1. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}
```
I am unable to use the AzureOpenAIEmbeddingSkill in AI Search to embed (vectorize) a specific node of a JSON file (a file containing an array of JSON objects) and store it in a search index field. The indexer is defined as follows:

```json
{
  "name": "new-indexer",
  "description": null,
  "dataSourceName": "bijitgitrag",
  "skillsetName": "bijit-vector-skillset",
  "targetIndexName": "new-index",
  "disabled": null,
  "schedule": null,
  "parameters": {
    "batchSize": null,
    "maxFailedItems": null,
    "maxFailedItemsPerBatch": null,
    "base64EncodeKeys": null,
    "configuration": {
      "parsingMode": "jsonArray",
      "allowSkillsetToReadFileData": false
    }
  },
  "fieldMappings": [],
  "outputFieldMappings": [
    {
      "sourceFieldName": "/document/embedding/*",
      "targetFieldName": "create_vector"
    }
  ],
  "cache": null,
  "encryptionKey": null
}
```
The corresponding skillset is defined as follows:

```json
"skills": [
  {
    "@odata.type": "#Microsoft.Skills.Text.AzureOpenAIEmbeddingSkill",
    "name": "#1",
    "description": null,
    "context": "/document/create_description",
    "resourceUri": "https://abc.openai.azure.com",
    "apiKey": "##########",
    "deploymentId": "ada-text-embeddings-002",
    "inputs": [
      {
        "name": "text",
        "source": "/document/create_description"
      }
    ],
    "outputs": [
      {
        "name": "embedding"
      }
    ],
    "authIdentity": null
  }
]
```
Document Details
⚠ Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.
- ID: 9a3a375f-47b9-d149-d30f-22684b67de2d
- Version Independent ID: d81aac01-3b68-1828-38d3-5bb770506314
- Content: Azure OpenAI Embedding skill - Azure AI Search
- Content Source: articles/search/cognitive-search-skill-azure-openai-embedding.md
- Service: cognitive-search
- GitHub Login: @dharun1995
- Microsoft Alias: dhanasekars
@BijitDey Thanks for your feedback! We will investigate and update as appropriate.
Hi @BijitDey, from the error, it looks like the problem is with your input definition, and the context doesn't look right either. I can't provide tech support on this channel, but are you splitting the document into chunks? You might need to use the split skill first (see the sketch below).
I recommend posting to Stack Overflow. I see other posts related to the AzureOpenAIEmbeddingSkill, and while not specifically related to your issue, it looks like the poster's context and inputs are correct, so maybe this post will unblock you: https://stackoverflow.com/questions/78079660/azure-ai-search-azureopenaiembeddingskill-concurrency-conflict-during-operation
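For illustration, here is a minimal sketch of that pattern: a SplitSkill that chunks the source field, followed by the embedding skill running once per chunk. This is a sketch under assumptions, not a verified fix; the pages name and chunk length are illustrative, while create_description, the resource URI, and the deployment ID are carried over from the skillset above.

```json
"skills": [
  {
    "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
    "name": "#1",
    "description": "Chunk the source text so each downstream embedding call receives a single input",
    "context": "/document",
    "textSplitMode": "pages",
    "maximumPageLength": 2000,
    "inputs": [
      { "name": "text", "source": "/document/create_description" }
    ],
    "outputs": [
      { "name": "textItems", "targetName": "pages" }
    ]
  },
  {
    "@odata.type": "#Microsoft.Skills.Text.AzureOpenAIEmbeddingSkill",
    "name": "#2",
    "description": "Embed each chunk; the context ends in /* so the skill fires once per chunk rather than once over the whole array",
    "context": "/document/pages/*",
    "resourceUri": "https://abc.openai.azure.com",
    "apiKey": "##########",
    "deploymentId": "ada-text-embeddings-002",
    "inputs": [
      { "name": "text", "source": "/document/pages/*" }
    ],
    "outputs": [
      { "name": "embedding" }
    ]
  }
]
```

With this layout, each chunk's vector lands at /document/pages/*/embedding.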
Thank you @HeidiSteen. Can you be specific about what is wrong in the input definition and the context, so I can understand and make the required corrections?
@BijitDey, I was wrong about the context. I thought I saw "create_description" in that path, but I see now that isn't the case. Another thing that caught my eye: if you're chunking documents, your skill input syntax needs an * in the path. Setting all that aside, I did a web search on your error, and it's coming from Azure OpenAI, although OpenAI seems to return a version of that error too.
There are some links at the bottom of the article that offer guidance if you run into token limits. Can you take a look at those for your next step? (The output-mapping side of the asterisk point is sketched below.)
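To make the asterisk point concrete: with a chunked layout like the split-skill sketch above, the indexer's outputFieldMappings would also need to reference the per-chunk path rather than /document/embedding/*. A hedged sketch (create_vector is the field from the indexer above; the pages path assumes the earlier sketch, and mapping many chunks into a single parent document generally calls for a collection field or index projections instead):

```json
"outputFieldMappings": [
  {
    "sourceFieldName": "/document/pages/*/embedding",
    "targetFieldName": "create_vector"
  }
]
```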
#please-close