
New document embedding error, PluginInvokeError: {"args":{},"error_type":"HTTPStatusError","message":"Client error '413 Payload Too Large' for url 'http://xx.xxx.xx.xx:xxxx/tokenize'"}

Thunderinmoonlight opened this issue 9 months ago • 4 comments

Self Checks

  • [x] This is only for bug report, if you would like to ask a question, please head to Discussions.
  • [x] I have searched for existing issues, including closed ones.
  • [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [x] [FOR CHINESE USERS] Please be sure to submit issues in English, or they will be closed. Thank you! :)
  • [x] Please do not modify this template :) and fill in all the required fields.

Dify version

1.0.0

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

I use TEI (Text Embeddings Inference) for embedding inference. An error is reported when embedding a new document; documents whose embedding has already completed can still be recalled.

worker-1         | 2025-03-05 16:46:30.130 ERROR [Dummy-12] [indexing_runner.py:96] - consume document failed
worker-1         | Traceback (most recent call last):
worker-1         |   File "/app/api/core/indexing_runner.py", line 73, in run
worker-1         |     documents = self._transform(
worker-1         |                 ^^^^^^^^^^^^^^^^
worker-1         |   File "/app/api/core/indexing_runner.py", line 706, in _transform
worker-1         |     documents = index_processor.transform(
worker-1         |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1         |   File "/app/api/core/rag/index_processor/processor/paragraph_index_processor.py", line 58, in transform
worker-1         |     document_nodes = splitter.split_documents([document])
worker-1         |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1         |   File "/app/api/core/rag/splitter/text_splitter.py", line 96, in split_documents
worker-1         |     return self.create_documents(texts, metadatas=metadatas)
worker-1         |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1         |   File "/app/api/core/rag/splitter/text_splitter.py", line 81, in create_documents
worker-1         |     for chunk in self.split_text(text):
worker-1         |                  ^^^^^^^^^^^^^^^^^^^^^
worker-1         |   File "/app/api/core/rag/splitter/fixed_text_splitter.py", line 71, in split_text
worker-1         |     final_chunks.extend(self.recursive_split_text(chunk))
worker-1         |                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1         |   File "/app/api/core/rag/splitter/fixed_text_splitter.py", line 97, in recursive_split_text
worker-1         |     s_lens = self._length_function(splits)
worker-1         |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1         |   File "/app/api/core/rag/splitter/fixed_text_splitter.py", line 38, in _token_encoder
worker-1         |     return embedding_model_instance.get_text_embedding_num_tokens(texts=texts)
worker-1         |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1         |   File "/app/api/core/model_manager.py", line 244, in get_text_embedding_num_tokens
worker-1         |     self._round_robin_invoke(
worker-1         |   File "/app/api/core/model_manager.py", line 370, in _round_robin_invoke
worker-1         |     return function(*args, **kwargs)
worker-1         |            ^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1         |   File "/app/api/core/model_runtime/model_providers/__base/text_embedding_model.py", line 65, in get_num_tokens
worker-1         |     return plugin_model_manager.get_text_embedding_num_tokens(
worker-1         |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1         |   File "/app/api/core/plugin/manager/model.py", line 313, in get_text_embedding_num_tokens
worker-1         |     for resp in response:
worker-1         |                 ^^^^^^^^
worker-1         |   File "/app/api/core/plugin/manager/base.py", line 189, in _request_with_plugin_daemon_response_stream
worker-1         |     self._handle_plugin_daemon_error(error.error_type, error.message)
worker-1         |   File "/app/api/core/plugin/manager/base.py", line 221, in _handle_plugin_daemon_error
worker-1         |     raise PluginInvokeError(description=message)
worker-1         | core.plugin.manager.exc.PluginInvokeError: PluginInvokeError: {"args":{},"error_type":"HTTPStatusError","message":"Client error '413 Payload Too Large' for url 'http://xxxxx:xxx/tokenize'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/413"}
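
From the traceback, `get_text_embedding_num_tokens` appears to forward all of the splitter's chunks to the TEI `/tokenize` endpoint in a single request, so a large document can exceed the server's payload and batch limits. A minimal, hypothetical sketch of the batching workaround the plugin would need (the batch size of 128 and the stand-in tokenizer are assumptions, not Dify's actual code):

```python
from typing import Callable, Iterator


def batch(items: list[str], max_batch_size: int) -> Iterator[list[str]]:
    """Yield successive slices of `items` no larger than max_batch_size."""
    for i in range(0, len(items), max_batch_size):
        yield items[i:i + max_batch_size]


def count_tokens_batched(
    splits: list[str],
    tokenize: Callable[[list[str]], list[int]],
    max_batch_size: int = 128,
) -> list[int]:
    """Call `tokenize` (e.g. a POST to TEI's /tokenize) once per batch,
    instead of once with the full list, staying under the server's limit."""
    lengths: list[int] = []
    for chunk in batch(splits, max_batch_size):
        lengths.extend(tokenize(chunk))
    return lengths


# Example with a stand-in whitespace tokenizer instead of a real TEI call:
fake_tokenize = lambda texts: [len(t.split()) for t in texts]
print(count_tokens_batched(["a b", "c d e", "f"], fake_tokenize, max_batch_size=2))
# → [2, 3, 1]
```

This keeps each request to `/tokenize` under the server-side batch limit at the cost of extra round trips.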

[Image attachment]

✔️ Expected Behavior

No response

❌ Actual Behavior

No response

Thunderinmoonlight avatar Mar 05 '25 17:03 Thunderinmoonlight

Hey there, @Thunderinmoonlight! I'm here to assist you with any bugs, questions, or contributions while you wait for a human maintainer. Let's get this sorted out together!

The error you're encountering, "413 Payload Too Large," typically indicates that the request size exceeds the server's configured limits. A similar issue was discussed in a closed issue where the solution involved adjusting the server's configuration settings. Here are the steps you can take to resolve this:

  1. Check UPLOAD_FILE_SIZE_LIMIT: Ensure that the UPLOAD_FILE_SIZE_LIMIT in your Dify configuration is set to a value larger than the file you are trying to upload.

  2. Update Nginx Configuration: Modify the client_max_body_size parameter in the Nginx configuration to match the desired upload limit. This can be done by setting the NGINX_CLIENT_MAX_BODY_SIZE environment variable in your Docker setup.

    Example:

    NGINX_CLIENT_MAX_BODY_SIZE=50M
    

    Update the docker-compose.yaml file:

    environment:
      NGINX_CLIENT_MAX_BODY_SIZE: ${NGINX_CLIENT_MAX_BODY_SIZE:-50M}
    

    Ensure the Nginx configuration template reflects this change:

    client_max_body_size ${NGINX_CLIENT_MAX_BODY_SIZE};
    
  3. Rebuild and Restart Docker Containers: After making these changes, rebuild and restart your Docker containers:

    docker-compose down
    docker-compose up --build -d
    

These steps should help resolve the "413 Payload Too Large" error and allow you to upload files successfully [1].

To continue talking to Dosu, mention @dosu.



dosubot[bot] avatar Mar 05 '25 17:03 dosubot[bot]

I tried this method and still got the same error.

RongRongStudio avatar Mar 06 '25 02:03 RongRongStudio

I have the same problem @dosu

Babige avatar Mar 08 '25 10:03 Babige

Me too. The Hugging Face text embedding container is logging a lot of errors like:

ERROR tokenize: text_embeddings_router::http::server: router/src/http/server.rs:1361: batch size 213 > maximum allowed batch size 128

and also:

2025-03-16T08:29:28.742139Z ERROR tokenize:tokenize{add_special_tokens=true prompt_name=None}: text_embeddings_core::infer: core/src/infer.rs:71: Input validation error: inputs cannot be empty

The Hugging Face text embedding container is started with:

docker run --gpus all -e HF_API_TOKEN=hf_token -p 1080:80 -v /mnt/docker2/huggingface/config:/data --pull always ghcr.io/huggingface/text-embeddings-inference:1.6 --model-id intfloat/multilingual-e5-large-instruct -max-client-batch-size 4096 --auto-truncate

Embedding works with small documents, but fails with large documents (>100 pages), even though I set the upload size parameters etc. as suggested.
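One thing worth double-checking in the `docker run` command above: the batch flag is written with a single dash (`-max-client-batch-size`), while TEI's CLI options use two dashes, so the intended limit of 4096 may never have taken effect. A hedged example of the corrected invocation (model, port, and volume paths copied from above; the flag value of 4096 is the one the reporter intended, not a recommendation):

```shell
docker run --gpus all \
  -e HF_API_TOKEN=hf_token \
  -p 1080:80 \
  -v /mnt/docker2/huggingface/config:/data \
  --pull always \
  ghcr.io/huggingface/text-embeddings-inference:1.6 \
  --model-id intfloat/multilingual-e5-large-instruct \
  --max-client-batch-size 4096 \
  --auto-truncate
```

Note this only raises the server's limit; the cleaner fix is for the client (the Dify TEI plugin) to batch its requests instead of sending every split at once.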

mamema avatar Mar 16 '25 12:03 mamema

You can use the OpenAI-API-compatible plugin as a temporary workaround. I have submitted a PR to the TEI plugin to fix this.

[Image attachment]

hikariming avatar Apr 07 '25 09:04 hikariming