
Images passed to Vision models with low quality parameter, even though high quality was chosen

Open · sigvardt opened this issue 1 year ago • 1 comment

Self Checks

  • [X] This is only for bug report; if you would like to ask a question, please head to Discussions.
  • [X] I have searched for existing issues, including closed ones.
  • [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [X] Please do not modify this template :) and fill in all the required fields.

Dify version

0.6.7

Cloud or Self Hosted

Cloud

Steps to reproduce

  1. Make a workflow with image capabilities and a LLM node with vision capabilities
  2. Ensure LLM node is set to High Quality vision
  3. Publish and run app
  4. Provide image file to the workflow
  5. The request sent to the LLM has the quality parameter set to low, even though it is defined to be high. Tested across models and providers; the issue seems to be persistent.

✔️ Expected Behavior

  • Defined vision quality settings are reflected in the call to the LLM endpoint

❌ Actual Behavior

  • Discrepancy between settings and executed call to endpoint
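For context, OpenAI-style vision requests carry the quality setting as a `detail` field on each `image_url` content part. A minimal sketch of the reported discrepancy, assuming that payload format (the URL and helper function below are illustrative, not taken from Dify's code):

```python
# Sketch of the payload discrepancy, assuming the OpenAI-style
# Chat Completions vision format ("detail" on each image_url part).
# The URL and helper name are illustrative, not from Dify's code.

def image_part(url: str, detail: str) -> dict:
    """Build one image content part with an explicit detail level."""
    return {"type": "image_url", "image_url": {"url": url, "detail": detail}}

# What the UI setting ("High Quality") implies should be sent:
expected = image_part("https://example.com/photo.png", "high")

# What the issue reports is actually sent:
actual = image_part("https://example.com/photo.png", "low")

print(expected["image_url"]["detail"])  # high
print(actual["image_url"]["detail"])    # low
```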

sigvardt avatar May 10 '24 10:05 sigvardt

It appears there's a bug affecting the way the Dify platform handles the vision quality setting for LLM nodes, leading to a discrepancy between the selected 'High' quality in the UI and the 'low' quality parameter being sent in the request. This issue might stem from either the UI component responsible for setting the vision quality or the backend logic that constructs the request to the LLM endpoint.

The relevant code for handling vision quality settings is located in the VisionConfig class within the entities.py file, which includes a detail attribute for setting the quality parameter. The UI component for selecting the vision resolution, found in param-config-content.tsx, uses a RadioGroup component to update the visionConfig.detail state. The transition from UI selection to the request sent to the LLM endpoint is managed by the _generate method in the NVIDIALargeLanguageModel class, which constructs the request payload including the quality parameter.

To address this issue, it's essential to review the synchronization between the UI component's state and the backend request construction logic, particularly how the model_parameters are handled in the _generate method. Checking for any recent changes in these areas or known bugs could also provide insights.
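The propagation path described above can be sketched as follows. The `VisionConfig` class and its `detail` attribute are mentioned in the comment; the builder function, field defaults, and the hard-coded-"low" failure mode are hypothetical illustrations, not Dify's actual implementation:

```python
# Hedged sketch of the propagation path: a VisionConfig-like object
# (the comment mentions a `detail` attribute in entities.py) feeding
# into request construction. Names and defaults here are illustrative.
from dataclasses import dataclass


@dataclass
class VisionConfig:
    enabled: bool = True
    detail: str = "high"  # value chosen in the UI RadioGroup


def build_image_parts(urls: list[str], config: VisionConfig) -> list[dict]:
    # The reported bug would correspond to ignoring config.detail here
    # and hard-coding (or defaulting to) "low" instead.
    return [
        {"type": "image_url", "image_url": {"url": u, "detail": config.detail}}
        for u in urls
    ]


parts = build_image_parts(["https://example.com/a.png"], VisionConfig(detail="high"))
assert all(p["image_url"]["detail"] == "high" for p in parts)
```

Verifying that the configured `detail` value survives each hop (UI state, workflow config, model `_generate` call) would localize where it is being dropped.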

For a more detailed investigation, sharing specific models, providers, and any error messages or logs you're encountering would be beneficial. This information could help pinpoint the exact cause of the discrepancy and facilitate a quicker resolution.


dosubot[bot] avatar May 10 '24 10:05 dosubot[bot]

I encountered the same issue. It is still not fixed in Dify 0.6.9. I also tried both the gpt-4-turbo and gpt-4o models, but the same problem occurs.

This problem is very difficult to notice because it does not produce an error, and it is quite troublesome because it silently reduces the overall accuracy of the workflow (in my case, I only noticed it because of the accuracy difference between Dify and the Vision API).

kei-yamazaki avatar Jun 04 '24 01:06 kei-yamazaki

Solved by https://github.com/langgenius/dify/pull/5253

eightHundreds avatar Jun 19 '24 11:06 eightHundreds