Error encountered while calling the yi-vl-plus model.
Self Checks
- [X] I have searched for existing issues, including closed ones.
- [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [X] Please do not modify this template :) and fill in all the required fields.
Dify version
0.5.10
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
Every time I call the yi-vl-plus model, I encounter this error. Error log from the api docker container:

tYbi48KancRxuxWUWrvEf8AgSgg/gaz9mjTnP0Gtv2hf2cb7yPs/wASba2zvxLc6dNCn6p/9Y7qK/PHU9bF7YCFLbyEQYx/HRV+x8yz/9n7LkULAAAAACETXvC/cv1W5mmGqHriBEE=', detail=<DETAIL.LOW: 'low'>)] is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/api/core/application_manager.py", line 182, in _generate_worker
    runner.run(
  File "/app/api/core/app_runner/basic_app_runner.py", line 206, in run
    self._handle_invoke_result(
  File "/app/api/core/app_runner/app_runner.py", line 230, in _handle_invoke_result
    self._handle_invoke_result_stream(
  File "/app/api/core/app_runner/app_runner.py", line 263, in _handle_invoke_result_stream
    for result in invoke_result:
  File "/app/api/core/model_runtime/model_providers/__base/large_language_model.py", line 418, in _invoke_result_generator
    raise self._transform_invoke_error(e)
core.model_runtime.errors.invoke.InvokeError: [yi] Error: Input [TextPromptMessageContent(type=<PromptMessageContentType.TEXT: 'text'>, data='判断图片中是否包括小票或者显示器截屏?\n'), ImagePromptMessageContent(type=<PromptMessageContentType.IMAGE: 'image'>, data='data:image/jpeg;base64,/9j/4AAQS
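For context, the validation message at the end of the log comes from the tokenizer side: it only accepts a plain string, a list/tuple of strings, or a list/tuple of integers, yet it is being handed the prompt-content objects (TextPromptMessageContent / ImagePromptMessageContent) directly. A rough sketch of the kind of flattening that has to happen before token counting; the function and attribute names below are illustrative guesses based on the repr in the log, not Dify's actual code:

```python
# Illustrative sketch only -- attribute names are inferred from the repr in
# the log (TextPromptMessageContent(type=<...TEXT: 'text'>, data='...')),
# not from Dify's source.
def flatten_for_tokenizer(content) -> str:
    """Reduce multimodal message content to plain text before token counting."""
    if isinstance(content, str):
        return content  # already in a form the tokenizer accepts
    texts = []
    for item in content:
        item_type = getattr(item, "type", None)
        # Keep only text parts; image parts carry base64 data, not text,
        # and would have to be counted separately (if at all).
        if getattr(item_type, "value", None) == "text":
            texts.append(item.data)
    return "\n".join(texts)


flatten_for_tokenizer("plain prompt")  # -> "plain prompt"
```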
✔️ Expected Behavior
No response
❌ Actual Behavior
No response
Hey @wurenxiaozu! 😊 Great to see you back here. Hope you've been doing awesome since our last chat!
Thank you for providing detailed information about the issue you're encountering. It looks like the error occurs when the yi-vl-plus model tries to process an image input, and the error message suggests that the input format is not valid.
Based on the yi-vl-plus.yaml file, the yi-vl-plus model expects a specific input format. The error message indicates that a string, a list/tuple of strings, or a list/tuple of integers is expected, but the input being provided does not match these expectations.
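To make that concrete: the wording of the error matches the input validation of a transformers-style (slow) tokenizer, which is presumably what sits behind the token counting here. Under that assumption, the accepted and rejected inputs look like this:

```python
# A minimal sketch, assuming a transformers GPT-2 tokenizer; the exact
# tokenizer used for the yi provider is an assumption, but the error wording
# matches transformers' input validation.
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")

tok.encode("describe this receipt")  # OK: a plain string
tok.encode([72, 101, 108])           # OK: a list/tuple of integers (token ids)

# Not OK: a list of content objects such as
#   [TextPromptMessageContent(...), ImagePromptMessageContent(...)]
# raises: ValueError: Input [...] is not valid. Should be a string,
#         a list/tuple of strings or a list/tuple of integers.
```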
The error also occurs while the invoke result is being handled. According to the invoke.py file, InvokeError is raised when something goes wrong during the invocation process. This could be the input-format issue described above, or another problem such as an issue with the model itself or with the server.
The application_manager.py and basic_app_runner.py files suggest that the error surfaces while the worker is generated or the app is run. These files handle the generation and execution of the application, and any issue during these steps can lead to the error you're seeing.
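For reference, the traceback shows the provider-side exception being converted in `_invoke_result_generator` via `self._transform_invoke_error(e)`. A minimal sketch of that wrapping pattern (only the names come from the traceback; the body is a stand-in, not Dify's actual implementation):

```python
# Sketch of the error-wrapping pattern visible in the traceback; the names
# _invoke_result_generator / InvokeError come from the log, the body is
# illustrative.
from typing import Generator


class InvokeError(Exception):
    """Raised when a model invocation fails for any reason."""


def _invoke_result_generator(stream: Generator) -> Generator:
    """Yield streamed chunks, re-raising provider errors as InvokeError."""
    try:
        for chunk in stream:
            yield chunk
    except Exception as e:
        # Corresponds to `raise self._transform_invoke_error(e)` in the trace:
        # the original tokenizer/provider error surfaces as an InvokeError.
        raise InvokeError(f"[yi] Error: {e}") from e
```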
To resolve this, I would recommend checking the format of the input you're providing to the yi-vl-plus model and making sure it matches the expected format described in the yi-vl-plus.yaml file. If the input format is correct, the issue might be with the model or the server; in that case, check the model configuration and the server status.
If you're still running into the issue after checking these things, could you please provide more information about the input you're sending to the yi-vl-plus model? That would help pinpoint the exact cause.
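For comparison, a well-formed request body for an OpenAI-compatible vision endpoint such as yi-vl-plus would look roughly like the following (URL and prompt are placeholders):

```python
# Placeholder values; the structure follows the OpenAI-compatible
# chat-completions format for vision models.
payload = {
    "model": "yi-vl-plus",
    "stream": True,
    "max_tokens": 512,
    "temperature": 0.3,
    "top_p": 0.8,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this receipt."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/receipt.jpg", "detail": "low"},
                },
            ],
        }
    ],
}
```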
Sources
- api/core/model_runtime/model_providers/yi/llm/yi-vl-plus.yaml
- api/core/model_runtime/errors/invoke.py
- api/core/application_manager.py
- api/core/app_runner/basic_app_runner.py
About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
Please check if it's a bug. Thanks.
@soulteary Can you help me take a look at this issue?
I'll take a look at it later. The API seemed to be undergoing changes a few days ago; this might be worth writing an article about.
DEBUG:httpx:load_ssl_context verify=True cert=None trust_env=True http2=False
DEBUG:httpx:load_verify_locations cafile='/usr/local/lib/python3.10/site-packages/certifi/cacert.pem'
DEBUG:openai._base_client:Request options: {'method': 'post', 'url': '/chat/completions', 'files': None, 'json_data': {'messages': [{'role': 'user', 'content': [{'type': 'text', 'text': '判断图片用于判断店铺当日销售情况,请读取图片,如果是日结小票或者日报表截图,请依据小票返回可识别信息,如果无法判断,请返回:\n{\n "ans": "false"\n}\n如果可以判断,请严格按照如下json格式返回:\n{\n "ans": "true",\n "date": "识别出来的日期填入此处,日期格式YYYYMMDD",\n "num": "识别出来的当日销售总金额以元为单位填入数值到此处,注意金额小数点"\n}\n返回结果仅json格式,不需要文字解释。'}, {'type': 'image_url', 'image_url': {'url': 'http://180.184.55.228/env-101/por-501/aiapp/lcap/file/mdpic/b471fd67-4735-411d-91dc-5a8b8b516017/dc73d9f2-b826-4cc2-9dcb-a9ad4f2851db/66010f4d961a202d0eb71d90/20240402/3uf0773C7g0jeNbbfI5Jdn0jbm719x2h3Ree914X8J0Xej7yeg4U0N6Hcu0Zd26y.jpg', 'detail': 'low'}}]}], 'model': 'yi-vl-plus', 'max_tokens': 512, 'stream': True, 'temperature': 0.3, 'top_p': 0.8}}
DEBUG:httpcore.connection:connect_tcp.started host='api.lingyiwanwu.com' port=443 local_address=None timeout=5.0 socket_options=None
DEBUG:httpcore.connection:connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x7f5887567670>
DEBUG:httpcore.connection:start_tls.started ssl_context=<gevent.ssl.SSLContext object at 0x7f58867060c0> server_hostname='api.lingyiwanwu.com' timeout=5.0
DEBUG:httpcore.connection:start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0x7f58859b9090>
DEBUG:httpcore.http11:send_request_headers.started request=<Request [b'POST']>
DEBUG:httpcore.http11:send_request_headers.complete
DEBUG:httpcore.http11:send_request_body.started request=<Request [b'POST']>
DEBUG:httpcore.http11:send_request_body.complete
DEBUG:httpcore.http11:receive_response_headers.started request=<Request [b'POST']>
DEBUG:httpcore.http11:receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Tue, 02 Apr 2024 11:03:02 GMT'), (b'Content-Type', b'text/event-stream;charset=UTF-8'), (b'Content-Length', b'0'), (b'Connection', b'keep-alive'), (b'Set-Cookie', b'acw_tc=2760820517120557808798881e49000f8d5fbf10ed809f523d01639d3ade44;path=/;HttpOnly;Max-Age=1800'), (b'eagleeye-traceid', b'2760820517120557808798881e4900'), (b'vary', b'Origin,Access-Control-Request-Method,Access-Control-Request-Headers'), (b'model', b'yi-vl-plus'), (b'req-cost-time', b'1461'), (b'req-arrive-time', b'1712055781022'), (b'resp-start-time', b'1712055782484'), (b'x-envoy-upstream-service-time', b'1461')])
INFO:httpx:HTTP Request: POST https://api.lingyiwanwu.com/v1/chat/completions "HTTP/1.1 200 OK"
DEBUG:openai._base_client:HTTP Request: POST https://api.lingyiwanwu.com/v1/chat/completions "200 OK"
DEBUG:httpcore.http11:receive_response_body.started request=<Request [b'POST']>
DEBUG:httpcore.http11:receive_response_body.complete
DEBUG:httpcore.http11:response_closed.started
DEBUG:httpcore.http11:response_closed.complete
ERROR:core.application_manager:Validation Error when generating
Traceback (most recent call last):
  File "/app/api/core/application_manager.py", line 182, in _generate_worker
    runner.run(
  File "/app/api/core/app_runner/basic_app_runner.py", line 206, in run
    self._handle_invoke_result(
  File "/app/api/core/app_runner/app_runner.py", line 230, in _handle_invoke_result
    self._handle_invoke_result_stream(
  File "/app/api/core/app_runner/app_runner.py", line 283, in _handle_invoke_result_stream
    llm_result = LLMResult(
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMResult
model
  none is not an allowed value (type=type_error.none.not_allowed)
DEBUG:core.app_runner.generate_task_pipeline:error: 1 validation error for LLMResult
model
  none is not an allowed value (type=type_error.none.not_allowed)
DEBUG:httpcore.connection:close.started
DEBUG:httpcore.connection:close.complete
DEBUG:httpcore.connection:close.started
DEBUG:httpcore.connection:close.complete
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): updates.dify.ai:443
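One detail worth highlighting in the log above: the upstream call to api.lingyiwanwu.com returns HTTP 200, and the failure only appears when the final LLMResult is assembled (`model: none is not an allowed value`), i.e. the model name ends up empty when the stream is put back together. A quick way to check what the endpoint actually streams back, using the same openai client the DEBUG output shows (API key and image URL are placeholders):

```python
# Minimal reproduction outside Dify, assuming the OpenAI-compatible Yi
# endpoint from the log; the API key and image URL are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://api.lingyiwanwu.com/v1", api_key="YOUR_YI_API_KEY")

stream = client.chat.completions.create(
    model="yi-vl-plus",
    stream=True,
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/receipt.jpg", "detail": "low"}},
        ],
    }],
)

for chunk in stream:
    # If chunk.model comes back empty here, that would explain the
    # "none is not an allowed value" ValidationError for LLMResult.model.
    delta = chunk.choices[0].delta.content if chunk.choices else None
    print(chunk.model, delta)
```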
same problem