After upgrading to v1.0.0, normal conversations and embeddings no longer work
Self Checks
- [x] This is only for bug report, if you would like to ask a question, please head to Discussions.
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [x] [FOR CHINESE USERS] Please submit issues in English, otherwise they will be closed. Thanks! :)
- [x] Please do not modify this template :) and fill in all the required fields.
Dify version
v1.0.0
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
- Deployment method: Docker
- Model provider: Xinference v1.3.0.post2
- LLM model: deepseek-r1-distill-qwen-32b, Int4
- Embedding model: bge-larger-zh-v1.5
- Reranker model: bge-reranker-large
The previous Dify version was v0.15.3. The upgrade process went normally, and there were no error messages in dify-api after the plugins were installed. The abnormalities are as follows:
- When creating a conversation, the conversation cannot proceed, and dify-api reports an error.
- When uploading knowledge base documents, the embedding fails, and the error message in dify-api is the same as during the conversation. The dify-api error output is as follows:
```
2025-02-28 16:37:42.914 DEBUG [Thread-2331 (process_request_thread)] [connectionpool.py:241] - Starting new HTTP connection (1): plugin_daemon:5002
2025-02-28 16:37:42.919 DEBUG [Thread-2331 (process_request_thread)] [connectionpool.py:544] - http://plugin_daemon:5002 "POST /plugin/6e269332-9839-43C4-B021-7fc860d5b67a/dispatch/model/schema HTTP/1.1" 200 None
2025-02-28 16:37:42.933 DEBUG [Thread-2331 (process_request_thread)] [connectionpool.py:241] - Starting new HTTP connection (1): plugin_daemon:5002
2025-02-28 16:37:42.933 DEBUG [Thread-2331 (process_request_thread)] [connectionpool.py:544] - http://plugin_daemon:5002 "POST /plugin/6e269332-9839-43C4-B021-7fc860d5b67a/dispatch/model/schema HTTP/1.1" 200 None
2025-02-28 16:37:42.939 DEBUG [Thread-2331 (process_request_thread)] [connectionpool.py:241] - Starting new HTTP connection (1): plugin_daemon:5002
2025-02-28 16:37:42.939 DEBUG [Thread-2331 (process_request_thread)] [connectionpool.py:544] - http://plugin_daemon:5002 "POST /plugin/6e269332-9839-43C4-B021-7fc860d5b67a/dispatch/model/schema HTTP/1.1" 200 None
2025-02-28 16:37:42.939 DEBUG [Thread-2331 (process_request_thread)] [connectionpool.py:241] - Starting new HTTP connection (1): plugin_daemon:5002
2025-02-28 16:37:42.939 DEBUG [Thread-2331 (process_request_thread)] [connectionpool.py:544] - http://plugin_daemon:5002 "POST /plugin/6e269332-9839-43C4-B021-7fc860d5b67a/dispatch/llm/tokens HTTP/1.1" 200 None
2025-02-28 16:37:42.957 ERROR [Thread-2332 (generate_worker)] [generate_worker.py:43] - Unknown Error when generating
Traceback (most recent call last):
  File "/app/api/core/app/chat/app_generator.py", line 223, in _generate_worker
    runner.run()
  File "/app/api/core/app/apps/chat/app_runner.py", line 58, in run
    self.get_pre_calculate_rest_tokens()
  File "/app/api/core/app/apps/base/app_runner.py", line 86, in get_pre_calculate_rest_tokens
    prompt_tokens = model_instance.get_llm_num_tokens(prompt_messages)
  File "/app/api/core/model_manager.py", line 195, in get_llm_num_tokens
    self._round_robin_invoke(
  File "/app/api/core/model_manager.py", line 370, in _round_robin_invoke
    return function(*args, **kwargs)
  File "/app/api/core/model_runtime/model_providers/_base/large_language_model.py", line 299, in get_num_tokens
    return runner.get_llm_num_tokens()
  File "/app/api/core/plugin/manager/model.py", line 231, in get_llm_num_tokens
    for resp in response:
  File "/app/api/core/plugin/manager/base.py", line 189, in _request_with_plugin_daemon_response_stream
    self._handle_plugin_daemon_error(error.error_type, error.message)
  File "/app/api/core/plugin/manager/base.py", line 221, in _handle_plugin_daemon_error
    raise PluginInvokeError(description)
core.plugin.manager.exceptions.PluginInvokeError: PluginInvokeError: {"args": {}, "error_type":"ConnectionError", "message":"HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7f20eccdc9d0>: Failed to resolve 'openaipublic.blob.core.windows.net' ([Errno -2] Name or service not known)"))"}
2025-02-28 16:37:42.976 DEBUG [Thread-2331 (process_request_thread)] [process_request_thread.py:45] - error: {"args": {}, "error_type":"ConnectionError", "message":"HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7f17590ec070>: Failed to resolve 'openaipublic.blob.core.windows.net' ([Errno -2] Name or service not known)"))"}
2025-02-28 16:37:42.976 ERROR [Thread-2331 (process_request_thread)] [base_app_generator.py:123] - Failed to generate response:
{"args": {}, "error_type":"ConnectionError", "message":"HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7f17590ec070>: Failed to resolve 'openaipublic.blob.core.windows.net' ([Errno -2] Name or service not known)"))"}
    raise PluginInvokeError(description=message)
core.plugin.manager.exc.PluginInvokeError: PluginInvokeError: {"args": {}, "error_type":"ConnectionError", "message":"HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7f5e0ccdc9d0>: Failed to resolve 'openaipublic.blob.core.windows.net' ([Errno -2] Name or service not known)"))"}
2025-02-28 16:37:42.972 DEBUG [Thread-2331 (process_request_thread)] [based_generate_task_pipeline.py:45] - error: PluginInvokeError: {"args": {}, "error_type":"ConnectionError", "message":"HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7f5e0ccdc9d0>: Failed to resolve 'openaipublic.blob.core.windows.net' ([Errno -2] Name or service not known)"))"}
2025-02-28 16:37:42.976 ERROR [Thread-2331 (process_request_thread)] [base_app_generate_response_converter.py:123] - {"args": {}, "error_type":"ConnectionError", "message":"HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7f5e0ccdc9d0>: Failed to resolve 'openaipublic.blob.core.windows.net' ([Errno -2] Name or service not known)"))"}
2025-02-28 16:37:42.976 INFO [Thread-2331 (process_request_thread)] [_internal.py:97] - 172.20.0.10 -- [28/Feb/2025 08:37:42] "POST /console/api/installed-apps/e27208ae-6eb4-4f72-9e08-68dc0cfbd523/chat-messages HTTP/1.1" 200 -
2025-02-28 16:37:42.983 INFO [Thread-2332 (process_request_thread)] [_internal.py:97] - 172.20.0.10 -- [28/Feb/2025 08:37:42] "OPTIONS /console/api/installed-apps/e27208ae-6eb4-4f72-9e08-68dc0cfbd523/messages/answer/placeholder=174075186225/suggested-questions HTTP/1.1" 404 -
2025-02-28 16:37:43.002 INFO [Thread-2334 (process_request_thread)] [_internal.py:97] - 172.20.0.10 -- [28/Feb/2025 08:37:43] "GET /console/api/installed-apps/e27208ae-6eb4-4f72-9e08-68dc0cfbd523/conversations?limit=10&pinned=false HTTP/1.1" 200 -
"Context canceled"
```
✔️ Expected Behavior
Conversations work normally, and uploaded documents are embedded successfully.
❌ Actual Behavior
Neither conversations nor embeddings can be completed.
The issue you're experiencing with Dify v1.0.0, where normal conversations and document embeddings are failing due to a ConnectionError related to 'openaipublic.blob.core.windows.net', is a known problem. This error typically occurs when the system tries to access this endpoint from an offline or restricted network environment. The same setup works normally with v0.15.3, suggesting that the issue is specific to changes introduced in v1.0.0 [1].
To address this, you might consider reverting to the latest stable release (v0.15.3) until these issues are resolved. Additionally, ensure that your network settings allow access to the required external resources, or consider configuring a proxy if necessary.
But in version 0.15.3, older versions of Chrome report that toReversed is not a function. How can this be solved?
This is also why I was eager to upgrade to v1.0.0: the current v0.15.3 is not very compatible with the deepseek-r1 model and cannot properly distinguish the thinking-chain content from the reply content.
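For context on the compatibility gap: deepseek-r1 emits its reasoning wrapped in `<think>...</think>` tags ahead of the final answer, so a UI has to split the two. The sketch below is a hypothetical client-side helper (not Dify's actual code) showing the kind of separation newer versions perform:

```python
import re

# deepseek-r1 places its chain of thought inside <think>...</think>
# before the visible reply. Split the two parts of a raw completion.
def split_reasoning(raw: str) -> tuple[str, str]:
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", raw, flags=re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", raw.strip()  # no thinking block present

thought, answer = split_reasoning("<think>2+2 is 4</think>The answer is 4.")
print(answer)  # The answer is 4.
```

Without such a split, the thinking chain leaks into the reply shown to the user, which matches the behavior described above for v0.15.3.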
I encountered the same problem and have rolled back to v0.15.3 to keep the service available.
😔 The same
```
api-1 | 2025-03-01 19:15:16.708 ERROR [Thread-4 (_generate_worker)] [app_generator.py:243] - Unknown Error when generating
api-1 | Traceback (most recent call last):
api-1 |   File "/app/api/core/app/apps/chat/app_generator.py", line 223, in _generate_worker
api-1 |     runner.run(
api-1 |   File "/app/api/core/app/apps/chat/app_runner.py", line 58, in run
api-1 |     self.get_pre_calculate_rest_tokens(
api-1 |   File "/app/api/core/app/apps/base_app_runner.py", line 86, in get_pre_calculate_rest_tokens
api-1 |     prompt_tokens = model_instance.get_llm_num_tokens(prompt_messages)
api-1 |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api-1 |   File "/app/api/core/model_manager.py", line 195, in get_llm_num_tokens
api-1 |     self._round_robin_invoke(
api-1 |   File "/app/api/core/model_manager.py", line 370, in _round_robin_invoke
api-1 |     return function(*args, **kwargs)
api-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^
api-1 |   File "/app/api/core/model_runtime/model_providers/__base/large_language_model.py", line 299, in get_num_tokens
api-1 |     return plugin_model_manager.get_llm_num_tokens(
api-1 |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api-1 |   File "/app/api/core/plugin/manager/model.py", line 231, in get_llm_num_tokens
api-1 |     for resp in response:
api-1 |                 ^^^^^^^^
api-1 |   File "/app/api/core/plugin/manager/base.py", line 189, in _request_with_plugin_daemon_response_stream
api-1 |     self._handle_plugin_daemon_error(error.error_type, error.message)
api-1 |   File "/app/api/core/plugin/manager/base.py", line 221, in _handle_plugin_daemon_error
api-1 |     raise PluginInvokeError(description=message)
api-1 | core.plugin.manager.exc.PluginInvokeError: PluginInvokeError: {"args":{},"error_type":"ConnectionError","message":"HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by NameResolutionError(\"\u003curllib3.connection.HTTPSConnection object at 0x7f0dea32a240\u003e: Failed to resolve 'openaipublic.blob.core.windows.net' ([Errno -3] Temporary failure in name resolution)\"))"}
api-1 | 2025-03-01 19:15:16.719 ERROR [Dummy-3] [base_app_generate_response_converter.py:123] - {"args":{},"error_type":"ConnectionError","message":"HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by NameResolutionError(\"\u003curllib3.connection.HTTPSConnection object at 0x7f0dea32a240\u003e: Failed to resolve 'openaipublic.blob.core.windows.net' ([Errno -3] Temporary failure in name resolution)\"))"}
```
This is tiktoken's fault: tiktoken_ext/openai_public.py#L17 downloads the GPT-2 tokenizer, and Dify references that tokenizer in python/dify_plugin/interfaces/model/ai_model.py#L281.
Reference: https://stackoverflow.com/questions/76106366/how-to-use-tiktoken-in-offline-mode-computer. Download vocab.bpe and encoder.json and place them in the cache:
```shell
mkdir tiktoken
cd tiktoken
wget https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe
cp vocab.bpe 6d1cbeee0f20b3d9449abfede4726ed8212e3aee
wget https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/encoder.json
cp encoder.json 6c7ea1a7e38e3a7f062df639a5b80947f075ffe6
```
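The opaque hex filenames above are not arbitrary: tiktoken looks up cached files by the SHA-1 digest of the download URL, which is why the two files must be renamed this way. A minimal sketch, assuming tiktoken's current cache-naming scheme (`tiktoken_cache_name` is a hypothetical helper name):

```python
import hashlib

# tiktoken caches a remote file in TIKTOKEN_CACHE_DIR under
# sha1(<download URL>), so the renamed copies must use these digests.
def tiktoken_cache_name(blob_url: str) -> str:
    return hashlib.sha1(blob_url.encode()).hexdigest()

base = "https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/"
print(tiktoken_cache_name(base + "vocab.bpe"))
print(tiktoken_cache_name(base + "encoder.json"))
```

If tiktoken ever changes its caching scheme, recompute the names this way rather than hard-coding them.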
Mount the tiktoken cache:
```yaml
environment:
  TIKTOKEN_CACHE_DIR: /app/api/.tiktoken/
  ...
volumes:
  - ./volumes/plugin_daemon:/app/storage
  - ./tiktoken:/app/api/.tiktoken
```
The method has been tested and confirmed effective. Thank you.
> This is tiktoken's fault: tiktoken_ext/openai_public.py#L17 downloads the GPT-2 tokenizer, and Dify references that tokenizer in python/dify_plugin/interfaces/model/ai_model.py#L281.

How can this be solved?
Which container should it be mounted into?
@momomobinx has already given a solution, but mounting is only a temporary workaround. I suggest the project team write it into the Dockerfile and bake it into the image.
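Baking the cache into the image, as suggested, could look roughly like this. This is only a sketch: it layers on the dify-api image tag used in this thread and assumes the two renamed files already sit in a local ./tiktoken/ directory.

```dockerfile
# Sketch: derive an image that ships the tiktoken cache,
# so no runtime bind mount or internet access is needed.
FROM langgenius/dify-api:1.0.1

ENV TIKTOKEN_CACHE_DIR=/app/api/.tiktoken/
COPY ./tiktoken/ /app/api/.tiktoken/
```

Build it (e.g. `docker build -t dify-api-offline .`) and reference it from docker-compose.yml in place of the stock image.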
This problem usually occurs when splitting knowledge base files. The solution is as follows:

[1] Create the folder and download the files:

```shell
cd dify/docker
mkdir tiktoken
cd tiktoken
wget https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe
mv vocab.bpe 6d1cbeee0f20b3d9449abfede4726ed8212e3aee
wget https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/encoder.json
mv encoder.json 6c7ea1a7e38e3a7f062df639a5b80947f075ffe6
```

Copy the two files to dify/docker/tiktoken. Then modify the permissions, otherwise embedding into the vector database fails with 'cannot release a lock that's no longer owned':

```shell
sudo chmod 777 6d1cbeee0f20b3d9449abfede4726ed8212e3aee
```
[2] Mount the tiktoken cache so it persists. Edit the compose file (sudo vim dify/docker/docker-compose.yml) and add the marked lines:

```yaml
services:
  # API service
  api:
    image: langgenius/dify-api:1.0.1
    restart: always
    environment:
      ...
      TIKTOKEN_CACHE_DIR: /app/api/.tiktoken/   # added
    volumes:
      ...
      - ./volumes/app/storage:/app/api/storage
      - ./tiktoken:/app/api/.tiktoken           # added
    networks:
      - ssrf_proxy_network
      - default

  # worker service: the Celery worker for processing the queue
  worker:
    image: langgenius/dify-api:1.0.1
    restart: always
    environment:
      ...
      TIKTOKEN_CACHE_DIR: /app/api/.tiktoken/   # added
    volumes:
      ...
      - ./volumes/app/storage:/app/api/storage
      - ./tiktoken:/app/api/.tiktoken           # added
    networks:
      - ssrf_proxy_network
      - default

  # plugin daemon
  plugin_daemon:
    image: langgenius/dify-plugin-daemon:0.0.4-local
    restart: always
    environment:
      ...
      TIKTOKEN_CACHE_DIR: /app/api/.tiktoken/   # added
    volumes:
      ...
      - ./volumes/plugin_daemon:/app/storage
      - ./tiktoken:/app/api/.tiktoken           # added
    depends_on:
      - db
```
After this, running docker compose down followed by docker compose up -d will no longer clear the cache, and no internet connection is needed.
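Before relying on the mounted cache offline, a quick sanity check avoids surprises. The sketch below assumes tiktoken's convention of naming cached files after the SHA-1 digest of the source URL; `missing_cache_files` is a hypothetical helper, not part of Dify:

```python
import hashlib
import tempfile
from pathlib import Path

# The two GPT-2 assets this thread downloads into the cache.
ASSETS = [
    "https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe",
    "https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/encoder.json",
]

def missing_cache_files(cache_dir: str) -> list[str]:
    """Return asset URLs whose cached copy (named sha1(url)) is absent or empty."""
    missing = []
    for url in ASSETS:
        cached = Path(cache_dir) / hashlib.sha1(url.encode()).hexdigest()
        if not cached.is_file() or cached.stat().st_size == 0:
            missing.append(url)
    return missing

# Demo against a throwaway directory: empty dir -> both assets missing;
# after writing placeholder files under their digest names -> none missing.
demo_dir = tempfile.mkdtemp()
before = missing_cache_files(demo_dir)
for url in ASSETS:
    (Path(demo_dir) / hashlib.sha1(url.encode()).hexdigest()).write_text("placeholder")
after = missing_cache_files(demo_dir)
print(len(before), len(after))  # 2 0
```

In a real deployment you would point it at the host directory bound to TIKTOKEN_CACHE_DIR (./tiktoken in the compose setup described here) before cutting network access.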
It works, give it a thumbs up.
This change will be released in plugin-daemon 0.0.6, thanks for your solution! PR: https://github.com/langgenius/dify-plugin-daemon/pull/127
In 1.1.3 I still see this issue.