azure-search-openai-demo
Integrated vectorization is not compatible with GPT-vision feature
Please provide us with the following information:
This issue is for a: (mark with an x)
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Minimal steps to reproduce
Following the README to set up an environment:

1. Run `azd env set USE_GPT4V true` and then `azd up` (AI region: swedencentral). No error messages.
2. Go to the URL given.
3. Switch to GPT-4V in the developer settings.
4. Click on one of the given questions.

Error: The app encountered an error processing your request. If you are an administrator of the app, view the full error in the logs. See aka.ms/appservice-logs for more information.
Error type: <class 'azure.core.exceptions.HttpResponseError'>
Any log messages given by the failure
(InvalidRequestParameter) Unknown field 'imageEmbedding' in vector field list. Code: InvalidRequestParameter Message: Unknown field 'imageEmbedding' in vector field list. Exception Details: (UnknownField) Unknown field 'imageEmbedding' in vector field list. Code: UnknownField Message: Unknown field 'imageEmbedding' in vector field list.
Expected/desired behavior
OS and Version?
Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)
Windows 11
azd version?
run `azd version` and copy paste here.
azd version 1.6.1 (commit eba2c978b5443fdb002c95add4011d9e63c2e76f)
Versions
Mention any other details that might be useful
Thanks! We'll be in touch soon.
- Was this your first time running azd up? If you ran it previously before enabling GPT-4-V, you'd need to delete the index so that it can be recreated with the imageEmbedding field (see the sketch after this list).
- Is it possible that you enabled the Integrated Vectorization feature? That is not compatible with GPT-4-V, so it would need to be set to false.
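If the integrated vectorization flag does need to be turned off, in this sample it is set through the azd environment (the variable is `USE_FEATURE_INT_VECTORIZATION`, if I recall the repo's docs correctly — verify before relying on it). And if the index needs to be recreated, here is a minimal sketch of inspecting and deleting it with the azure-search-documents SDK; the endpoint and index name are assumptions (`gptkbindex` is the sample's default), so substitute your environment's values. After deleting, rerun ingestion so the index is rebuilt with the imageEmbedding field:

```python
# Minimal sketch: inspect and delete the existing search index so the next
# ingestion run recreates it with the current schema (imageEmbedding included).
# Endpoint and index name below are assumptions -- use your azd env values.
from azure.identity import DefaultAzureCredential
from azure.search.documents.indexes import SearchIndexClient

endpoint = "https://<your-search-service>.search.windows.net"  # assumption
index_name = "gptkbindex"  # sample default; check AZURE_SEARCH_INDEX

client = SearchIndexClient(endpoint, DefaultAzureCredential())

# First confirm the field really is missing from the schema.
index = client.get_index(index_name)
print([field.name for field in index.fields])  # look for 'imageEmbedding'

client.delete_index(index_name)  # ingestion will recreate it on the next run
```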
Hi @pamelafox,
We have the same problem. We had to enable the integrated vectorization feature in order to successfully build the index with custom documents. Without it, we get the following error:
```
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens, however you requested 10515 tokens (10515 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
I see from your second point that the feature is not compatible with GPT-4-V. Is there anything else we can do to overcome this issue, or do we have to wait for compatibility? :)
Best regards, Mirco
@mmarahrens That looks like a different error. Have you checked to see what in your request is causing it to go over the token count? The prompt, the search results, the conversation history, etc? Requesting too many or too large search results can cause that error.
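One quick way to check that (a sketch, not part of the repo): count the tokens in each piece of the request with tiktoken, which uses the same encodings as the OpenAI models. The strings below are placeholders for your actual prompt, search results, and history:

```python
# Sketch: measure where the tokens go before the request is sent.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")  # the 8192-token model in the error

parts = {
    "system prompt": "<your system prompt>",             # placeholder
    "search results": "<concatenated result content>",   # placeholder
    "history": "<prior user/assistant turns>",           # placeholder
}

total = 0
for name, text in parts.items():
    n = len(enc.encode(text))
    total += n
    print(f"{name}: {n} tokens")
print(f"total: {total} (the error reported 10515 against an 8192 limit)")
```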
Ah, I was imprecise in my description. The error I describe in my comment is an upstream issue that occurs when deploying the environment with `azd up`, more precisely at index building, when we had an additional custom PDF document in the data folder. We overcame this issue by enabling the integrated vectorization preview feature.
After enabling it, the deployment runs smoothly, but now we are facing
Error: The app encountered an error processing your request.
If you are an administrator of the app, view the full error in the logs. See aka.ms/appservice-logs for more information.
Error type: <class 'azure.core.exceptions.HttpResponseError'>
when activating GPT-4V in the web UI and querying one of the default examples. At 6.5 MB, the custom document is larger than the other example documents.
If you are using gpt-4-vision, you may be experiencing the issues I'm working on in this pull request: https://github.com/Azure-Samples/azure-search-openai-demo/pull/1355. We weren't doing chunking when vision was enabled, but I'm bringing back chunking. You may be sending too much data to the model due to the text not being chunked. I can't tell from that error; you'll need to consult the logs for the full traceback: https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/docs/appservice.md
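For intuition about what that chunking does, here is an illustrative sketch only — the repo's actual splitter in the prepdocs pipeline is more careful about sentence and section boundaries. The idea is to split the extracted text into token-bounded, overlapping chunks so each indexed piece stays well under the model's context limit:

```python
# Illustrative sketch of token-based chunking with overlap; not the repo's
# actual implementation.
import tiktoken

def chunk_text(text: str, max_tokens: int = 500, overlap: int = 50) -> list[str]:
    """Split text into chunks of at most max_tokens tokens, overlapping by `overlap`."""
    enc = tiktoken.encoding_for_model("gpt-4")
    tokens = enc.encode(text)
    chunks = []
    step = max_tokens - overlap
    for start in range(0, len(tokens), step):
        chunks.append(enc.decode(tokens[start : start + max_tokens]))
        if start + max_tokens >= len(tokens):
            break
    return chunks

# A large PDF's extracted text becomes many small chunks, each of which can
# be embedded and retrieved individually instead of overflowing the prompt.
```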