
In Workflow mode, the LLM node returns a response error

hiwuye opened this issue 9 months ago • 1 comment

Self Checks

  • [x] This is only for a bug report; if you would like to ask a question, please head to Discussions.
  • [x] I have searched for existing issues, including closed ones.
  • [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [x] [FOR CHINESE USERS] Please be sure to submit issues in English; otherwise they will be closed. Thank you! :)
  • [x] Please do not modify this template :) and fill in all the required fields.

Dify version

1.1.0

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

In Workflow mode, the LLM node returns a response error:

(screenshot of the error attached)

In offline environments, Dify 0.15.3 does not have this issue, but versions 1.0.0, 1.0.1, and 1.1.0 all exhibit this problem. It is likely caused by Dify upgrading its tiktoken-related components without implementing local caching. My current workaround is to manually cache the relevant BPE and JSON files. Please address usage in offline environments by enabling local caching. Reference: How to use tiktoken in offline mode?

✔️ Expected Behavior

No response

❌ Actual Behavior

No response

hiwuye · Mar 19 '25 01:03

To resolve the ConnectionError related to hostname resolution in Workflow mode for Dify versions 1.0.0, 1.0.1, and 1.1.0 in offline environments, you can enable local caching for tiktoken-related components by following these steps:

  1. Download the required tokenizer files and store copies under the hash-based names tiktoken expects (the first sketch after this list shows how those names are derived):

    mkdir tiktoken
    cd tiktoken
    # Fetch the GPT-2 tokenizer files that tiktoken normally downloads at runtime
    wget https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe
    # tiktoken looks up cached files by a hash of the source URL, so keep a copy under that name
    cp vocab.bpe 6d1cbeee0f20b3d9449abfede4726ed8212e3aee
    wget https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/encoder.json
    cp encoder.json 6c7ea1a7e38e3a7f062df639a5b80947f075ffe6
    
  2. Modify the openai_public.py file to use the local files instead of downloading them (a sketch of this edit is shown after the list). The file is located at:

    • For Ollama: /App/storage/cwd/langgenius/ollama....../.venv/lib/site-packages/tiktoken_ext/openai_public.py
    • For Xinference: /App/storage/cwd/langgenius/xinference....../.venv/lib/site-packages/tiktoken_ext/openai_public.py
  3. Configure the Docker environment by adding these settings to your docker-compose file:

    environment:
      TIKTOKEN_CACHE_DIR: /app/api/.tiktoken/
    volumes:
      - ./tiktoken:/app/api/.tiktoken
    
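The hash filenames used in step 1 are not arbitrary: tiktoken names each cache entry after a hash of its download URL (in recent releases, the SHA-1 hex digest, per tiktoken's load.py). A minimal sketch to recompute those names for your installed version; if the output differs from the filenames above, use the computed values:

    import hashlib

    # tiktoken caches each remote blob under a hash of its URL
    # (assumption: SHA-1 hex digest, matching recent tiktoken releases).
    for url in (
        "https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe",
        "https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/encoder.json",
    ):
        print(hashlib.sha1(url.encode()).hexdigest(), "<-", url.rsplit("/", 1)[-1])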
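For step 2, the edit might look roughly like the sketch below. This is not the authoritative file contents: tiktoken_ext/openai_public.py varies between tiktoken versions, and the local paths are assumptions based on the cache directory mounted in step 3, so adapt both to your installation.

    # Sketch of the relevant portion of tiktoken_ext/openai_public.py,
    # assuming the structure of recent tiktoken releases.
    from tiktoken.load import data_gym_to_mergeable_bpe_ranks

    def gpt2():
        mergeable_ranks = data_gym_to_mergeable_bpe_ranks(
            # Originally remote URLs on openaipublic.blob.core.windows.net;
            # tiktoken's loader treats non-http(s) paths as local files.
            vocab_bpe_file="/app/api/.tiktoken/vocab.bpe",  # assumed local path
            encoder_json_file="/app/api/.tiktoken/encoder.json",  # assumed local path
        )
        # ...the rest of the constructor (pat_str, special_tokens, etc.)
        # stays unchanged.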

This setup allows tiktoken to use locally cached tokenizer files, thus resolving the ConnectionError by avoiding the need to download them from Azure Blob Storage [1][2][3].
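Once the cache is in place, a quick sanity check from inside the container should succeed without any network access. A minimal sketch, assuming the TIKTOKEN_CACHE_DIR value and mount path from the docker-compose settings above:

    import os

    # Must match TIKTOKEN_CACHE_DIR from the docker-compose configuration.
    os.environ.setdefault("TIKTOKEN_CACHE_DIR", "/app/api/.tiktoken/")

    import tiktoken

    enc = tiktoken.get_encoding("gpt2")
    # Should print token ids with no download attempt from Azure Blob Storage.
    print(enc.encode("hello offline world"))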


dosubot[bot] · Mar 19 '25 01:03