Uploading pictures in the workflow results in an error
Self Checks
- [x] This is only for bug report, if you would like to ask a question, please head to Discussions.
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [x] [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
- [x] Please do not modify this template :) and fill in all the required fields.
Dify version
1.1.1
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
✔️ Expected Behavior
Normal output
❌ Actual Behavior
Run failed: [openai] Error: PluginInvokeError: {"args":{},"error_type":"ValidationError","message":"1 validation error for ModelInvokeLLMRequest\nprompt_messages.content.type\n Input should be \u003cPromptMessageContentType.TEXT: 'text'\u003e [type=literal_error, input_value='text', input_type=str]\n For further information visit
Hello @xingxi521! I'm here to help you with any bugs, questions, or contributions you have for the Dify project. I'm a bot designed to assist you while you wait for a human maintainer.
The error you're encountering seems to be related to a validation issue with the input type for the ModelInvokeLLMRequest. The error message indicates that the input should be of type PromptMessageContentType.TEXT. This suggests that there might be a mismatch between the expected input type and the actual input provided in the workflow.
To resolve this issue, ensure that the input being passed to the LLM node is correctly formatted as a PromptMessage with content type TEXT. You might need to check the configuration of the input node and ensure that it aligns with the expected input type for the gpt-4-turbo model.
Additionally, you might find it helpful to review the documentation on handling image inputs in workflows, as it provides guidance on setting up image processing nodes and ensuring the correct input types are used [1][2].
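For reference, this class of literal_error can be reproduced with a minimal, self-contained Pydantic v2 model. The names below are illustrative stand-ins, not Dify's actual classes:

```python
# Minimal sketch reproducing the same class of Pydantic literal_error.
# ContentType and TextContent are illustrative stand-ins, NOT Dify's classes.
from enum import Enum
from typing import Literal

from pydantic import BaseModel, ValidationError


class ContentType(Enum):  # a plain Enum, not (str, Enum)
    TEXT = "text"


class TextContent(BaseModel):
    type: Literal[ContentType.TEXT]
    data: str


try:
    TextContent(type="text", data="hello")  # raw string: rejected
except ValidationError as exc:
    # Input should be <ContentType.TEXT: 'text'> [type=literal_error, ...]
    print(exc)

print(TextContent(type=ContentType.TEXT, data="hello"))  # enum member: accepted
```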
The same problem.
Same for me.
Any help?
I'm encountering the same issue. When passing an image variable from an upstream node to an LLM node in a Dify workflow, I get this validation error:
[openai] Error: PluginInvokeError: {"args":{},"error_type":"ValidationError","message":"1 validation error for ModelInvokeLLMRequest\nprompt_messages.content.type\n Input should be <PromptMessageContentType.IMAGE: 'image'> [type=literal_error, input_value='image', input_type=str]"}
The strange part is this exact JSON structure worked before:
{ "#files#": [ { "dify_model_identity": "__dify__file__", "type": "image", "transfer_method": "remote_url", "remote_url": "...", "mime_type": "image/jpeg" } ] }
It seems Pydantic now expects an enum value (PromptMessageContentType.IMAGE) instead of the string "image". Has there been a recent change in how image variables should be handled?
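If the SDK's content-type enum stopped subclassing `str` at some point (an assumption, not something I have confirmed), that alone would explain the change. A minimal sketch of the suspected difference:

```python
# Sketch of the suspected change (assumption: the content-type enum used to
# subclass str). With a str-based enum, the raw string "image" still satisfies
# the Literal; with a plain Enum it fails exactly like the error above.
from enum import Enum
from typing import Literal

from pydantic import BaseModel, ValidationError


class StrContentType(str, Enum):   # old style (assumed)
    IMAGE = "image"


class PlainContentType(Enum):      # new style (assumed)
    IMAGE = "image"


class OldImageContent(BaseModel):
    type: Literal[StrContentType.IMAGE]


class NewImageContent(BaseModel):
    type: Literal[PlainContentType.IMAGE]


print(OldImageContent(type="image"))  # accepted: "image" == StrContentType.IMAGE

try:
    NewImageContent(type="image")
except ValidationError as exc:
    # literal_error: Input should be <PlainContentType.IMAGE: 'image'>
    print(exc)
```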
Same problem😭.... The LLM was able to accept and understand images last Friday, but with the same workflow today it doesn’t run and gives the same error as you reported.
Same for me; in my tests, only OpenAI's plugin triggers this issue:
api-1 | 2025-03-24 11:37:29.205 ERROR [Thread-20 (_generate_worker)] [app_generator.py:246] - Unknown Error when generating
api-1 | Traceback (most recent call last):
api-1 | File "/app/api/core/app/apps/chat/app_generator.py", line 226, in _generate_worker
api-1 | runner.run(
api-1 | File "/app/api/core/app/apps/chat/app_runner.py", line 69, in run
api-1 | self.get_pre_calculate_rest_tokens(
api-1 | File "/app/api/core/app/apps/base_app_runner.py", line 90, in get_pre_calculate_rest_tokens
api-1 | prompt_tokens = model_instance.get_llm_num_tokens(prompt_messages)
api-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api-1 | File "/app/api/core/model_manager.py", line 195, in get_llm_num_tokens
api-1 | self._round_robin_invoke(
api-1 | File "/app/api/core/model_manager.py", line 370, in _round_robin_invoke
api-1 | return function(*args, **kwargs)
api-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
api-1 | File "/app/api/core/model_runtime/model_providers/__base/large_language_model.py", line 299, in get_num_tokens
api-1 | return plugin_model_manager.get_llm_num_tokens(
api-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api-1 | File "/app/api/core/plugin/manager/model.py", line 231, in get_llm_num_tokens
api-1 | for resp in response:
api-1 | ^^^^^^^^
api-1 | File "/app/api/core/plugin/manager/base.py", line 189, in _request_with_plugin_daemon_response_stream
api-1 | self._handle_plugin_daemon_error(error.error_type, error.message)
api-1 | File "/app/api/core/plugin/manager/base.py", line 221, in _handle_plugin_daemon_error
api-1 | raise PluginInvokeError(description=message)
api-1 | core.plugin.manager.exc.PluginInvokeError: PluginInvokeError: {"args":{},"error_type":"ValidationError","message":"1 validation error for ModelGetLLMNumTokens\nprompt_messages.content.type\n Input should be \u003cPromptMessageContentType.TEXT: 'text'\u003e [type=literal_error, input_value='text', input_type=str]\n For further information visit https://errors.pydantic.dev/2.8/v/literal_error"}
api-1 | 2025-03-24 11:37:29.221 ERROR [Dummy-21] [base_app_generate_response_converter.py:123] - {"args":{},"error_type":"ValidationError","message":"1 validation error for ModelGetLLMNumTokens\nprompt_messages.content.type\n Input should be \u003cPromptMessageContentType.TEXT: 'text'\u003e [type=literal_error, input_value='text', input_type=str]\n For further information visit https://errors.pydantic.dev/2.8/v/literal_error"}
Besides, I'm building my own plugin package and hit the same problem.
Same problem.
Same problem.
Same problem.
Same problem in v1.1.3.
Same problem too.
~~I don't know how to fix it, but I believe this is the problem.~~ ~~Sorry if I am wrong.~~
~~[dify]~~
if content.type == PromptMessageContentType.TEXT
else "[image]"
if content.type == PromptMessageContentType.IMAGE
else "[file]"
~~https://github.com/langgenius/dify/blob/d87d66ab88d4c794c1bed3bcb5fa881070ce7472/api/core/agent/fc_agent_runner.py#L438C1-L441C42~~
~~[dify-official-plugins azure_openai]~~
if item.type == PromptMessageContentType.TEXT
else "[IMAGE]"
if item.type == PromptMessageContentType.IMAGE
else ""
~~https://github.com/langgenius/dify-official-plugins/blob/cc0defe39607cb9739aed89f0428b7fc885a32c4/models/azure_openai/models/llm/llm.py#L413-L416~~
~~[dify-official-plugins openai]~~
if item.type == PromptMessageContentType.TEXT
else "[IMAGE]"
if item.type == PromptMessageContentType.IMAGE
else ""
~~https://github.com/langgenius/dify-official-plugins/blob/cc0defe39607cb9739aed89f0428b7fc885a32c4/models/openai/models/llm/llm.py#L1086-L1089~~
https://github.com/langgenius/dify/issues/16816
Same issue for me... waiting for good news.
Is there any progress?
@laipz8200 @crazywoola @Yeuoly I did a quick investigation and can confirm this issue was introduced in SDK 0.0.1b74.
Since the release of 0.0.1b74, this issue affects model plugins whose dependencies pull in the latest dify_plugin. For example, OpenAI has dify_plugin~=0.0.1b66 in its requirements.txt, which means the latest compatible version is installed at installation time, so 0.0.1b74 now gets installed.
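As a quick illustration of that resolution (assuming the `packaging` library, which pip uses for version specifiers):

```python
# Why "~=0.0.1b66" admits 0.0.1b74: PEP 440's compatible-release operator
# only pins the leading version components (here: >=0.0.1b66, ==0.0.*),
# so newer 0.0.x releases, including pre-releases, still match.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet("~=0.0.1b66")   # the OpenAI plugin's requirement
print(Version("0.0.1b73") in spec)  # True
print(Version("0.0.1b74") in spec)  # True -> the broken release is eligible
```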
In version 0.0.1b74, text-only prompts like this are handled correctly:
"prompt_messages": [
{
"role": "user",
"content": "hello",
"name": null
}
],
but it seems that type checking fails with multimodal prompt messages like the following:
"prompt_messages": [
{
"role": "user",
"content": [
{
"type": "text",
"data": "hello with image"
},
{
"type": "image",
"format": "png",
"base64_data": "iVBORw0...(omit)...TkSuQmCC",
"url": "",
"mime_type": "image/png",
"detail": "high"
}
],
The error occurs in get_llm_num_tokens here:
https://github.com/langgenius/dify-plugin-sdks/blob/0.0.1-beta.48/python/dify_plugin/core/plugin_executor.py#L147
I didn't investigate further, but perhaps this issue was introduced by https://github.com/langgenius/dify-plugin-sdks/pull/49 by @laipz8200.
I have sent PRs to pin the SDK version to at most 0.0.1b73 as a quick workaround:
- https://github.com/langgenius/dify-official-plugins/pull/568
- https://github.com/langgenius/dify-official-plugins/pull/569
- https://github.com/langgenius/dify-official-plugins/pull/572
@kurokobo Thanks for your help! In my case, I downloaded and installed the version 0.0.8 package from the marketplace and the above advice temporarily solved the problem.
Dify: v1.0.1
Temporary workarounds have been released. For those facing this issue, upgrade your plugin and give it another try :)
- Azure OpenAI: >= 0.0.11
- OpenAI: >= 0.0.14
- OpenRouter: >= 0.0.6
@kurokobo Hi, thanks a bunch! I'm working on fixing this issue in https://github.com/langgenius/dify-plugin-sdks/pull/73. Would you mind checking it out?
Hi, the same problem happened with the volcengine model provider plugin. What should I do to solve it? Thanks so much!
Me too.
Hi, I solved this problem now. Feel free to reach out to me if you need help.
How did you solve it? Please let me know.
At this stage, the issue has been identified as belonging to the dify_plugin SDK. The official team has not yet pushed a fixed SDK release to PyPI. For now, for plugins hitting this issue, you can manually modify the plugin's requirements.txt by changing:
dify_plugin~=xxx
to:
dify_plugin~=xxx,<0.0.1b74
and then resolve it by installing the plugin locally.
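As a sanity check (again assuming the `packaging` library), the pinned specifier excludes the broken SDK release while keeping the last good one; the base version below is the OpenAI plugin's as an example:

```python
# Verify that the pinned specifier from this workaround excludes the broken
# release. "xxx" from the instructions is replaced with an example base version.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

pinned = SpecifierSet("~=0.0.1b66,<0.0.1b74")
print(Version("0.0.1b73") in pinned)  # True  -> last good release still allowed
print(Version("0.0.1b74") in pinned)  # False -> broken release excluded
```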
The vLLM plugin and the OpenAI-API-compatible plugin still fail:
Run failed: [vllm] Error: PluginInvokeError: {"args":{},"error_type":"ValidationError","message":"1 validation error for ModelInvokeLLMRequest\nprompt_messages.content.type\n Input should be \u003cPromptMessageContentType.TEXT: 'text'\u003e [type=literal_error, input_value='text', input_type=str]\n For further information visit https://errors.pydantic.dev/2.10/v/literal_error"}
Run failed: [openai_api_compatible] Error: PluginInvokeError: {"args":{"description":"[models] Error: API request failed with status code 500: Internal Server Error"},"error_type":"InvokeError","message":"[models] Error: API request failed with status code 500: Internal Server Error"}
So how should this be solved? I'm using zhipuai.