
dify-1.0.0 internal server error

Open terrysun1216 opened this issue 10 months ago • 2 comments

Self Checks

  • [x] This is only for bug report, if you would like to ask a question, please head to Discussions.
  • [x] I have searched for existing issues, including closed ones.
  • [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [x] [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
  • [x] Please do not modify this template :) and fill in all the required fields.

Dify version

1.0.0

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

Is this due to a lack of disk space? The problem showed up right after a disk-space alarm from Ubuntu. Will it work if we migrate all the data to a bigger disk?
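
A quick way to confirm the disk-space suspicion (a sketch, assuming the stock self-hosted Docker setup where Dify's state lives under bind mounts in the docker/volumes directory):

# Overall disk usage on the host
df -h

# How much of it Docker itself is holding (images, containers, volumes)
docker system df

# Size of the Dify bind-mount volumes (path is an assumption; adjust
# to wherever your dify/docker checkout lives)
du -sh ./docker/volumes/*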

✔️ Expected Behavior

No response

❌ Actual Behavior

dify-plugin-daemon:32cd95f9d49bca005d5c594c980ff55d7e40a0c1-local:
2025/03/11 08:48:02 cluster_lifetime.go:124: [ERROR]failed to update the master: MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
2025/03/11 08:48:03 cluster_lifetime.go:129: [ERROR]failed to update the status of the node: lock timeout
2025/03/11 08:48:03 cluster_lifetime.go:124: [ERROR]failed to update the master: MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
2025/03/11 08:48:04 cluster_lifetime.go:124: [ERROR]failed to update the master: MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error
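
The MISCONF error means Redis's background RDB save failed (no room to write the snapshot) and, with the default stop-writes-on-bgsave-error yes, Redis refuses all writes until a save succeeds. Freeing disk space is the real fix; a temporary workaround, sketched below assuming the compose service is named redis, is to let Redis keep accepting writes in the meantime:

# Check the last background-save status from inside the Redis container
docker compose exec redis redis-cli info persistence | grep rdb_last_bgsave_status

# Temporary workaround only: stop refusing writes when the RDB save fails
# (resets on restart and does nothing about the full disk)
docker compose exec redis redis-cli config set stop-writes-on-bgsave-error no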

weaviate:1.19.0 
{"action":"startup","default_vectorizer_module":"none","level":"info","msg":"the default vectorizer modules is set to \"none\", as a result all new schema classes without an explicit vectorizer setting, will use this vectorizer","time":"2025-03-11T08:41:25Z"}
{"action":"startup","auto_schema_enabled":true,"level":"info","msg":"auto schema enabled setting is set to \"true\"","time":"2025-03-11T08:41:25Z"}
{"action":"hnsw_vector_cache_prefill","count":50000,"index_id":"vector_index_6dd9f94a_9f17_4c04_b541_12879fe75e27_node_MaBuhtJvIMZh","level":"info","limit":1000000000000,"msg":"prefilled vector cache","time":"2025-03-11T08:41:27Z","took":714144}
{"action":"grpc_startup","level":"info","msg":"grpc server listening at [::]:50051","time":"2025-03-11T08:41:29Z"}
{"action":"restapi_management","level":"info","msg":"Serving weaviate at http://[::]:8080","time":"2025-03-11T08:41:29Z"}
{"action":"hnsw_vector_cache_prefill","count":1843976,"index_id":"vector_index_e73c26db_b793_4611_9fa1_3f41cd83d66f_node_oje4zUatqWEM","level":"info","limit":1000000000000,"msg":"prefilled vector cache","time":"2025-03-11T08:41:36Z","took":8704748429}
{"action":"hnsw_vector_cache_prefill","count":621043,"index_id":"vector_index_7ba0c312_3ddc_4373_9114_29e7bbb38089_node_IcnerXEW4WKn","level":"info","limit":1000000000000,"msg":"prefilled vector cache","time":"2025-03-11T08:41:39Z","took":9826110090}
{"action":"read_disk_use","level":"warning","msg":"disk usage currently at 94.86%, threshold set to 80.00%","path":"/var/lib/weaviate","time":"2025-03-11T08:41:59Z"}
{"action":"lsm_compaction","class":"Vector_index_e73c26db_b793_4611_9fa1_3f41cd83d66f_Node","index":"vector_index_e73c26db_b793_4611_9fa1_3f41cd83d66f_node","level":"warning","msg":"compaction halted due to shard READONLY status","path":"/var/lib/weaviate/vector_index_e73c26db_b793_4611_9fa1_3f41cd83d66f_node_oje4zUatqWEM_lsm","shard":"oje4zUatqWEM","time":"2025-03-11T08:41:59Z"}
{"action":"lsm_compaction","class":"Vector_index_6dd9f94a_9f17_4c04_b541_12879fe75e27_Node","index":"vector_index_6dd9f94a_9f17_4c04_b541_12879fe75e27_node","level":"warning","msg":"compaction halted due to shard READONLY status","path":"/var/lib/weaviate/vector_index_6dd9f94a_9f17_4c04_b541_12879fe75e27_node_MaBuhtJvIMZh_lsm","shard":"MaBuhtJvIMZh","time":"2025-03-11T08:41:59Z"}
{"action":"lsm_compaction","class":"Vector_index_7ba0c312_3ddc_4373_9114_29e7bbb38089_Node","index":"vector_index_7ba0c312_3ddc_4373_9114_29e7bbb38089_node","level":"warning","msg":"compaction halted due to shard READONLY status","path":"/var/lib/weaviate/vector_index_7ba0c312_3ddc_4373_9114_29e7bbb38089_node_IcnerXEW4WKn_lsm","shard":"IcnerXEW4WKn","time":"2025-03-11T08:41:59Z"}
{"action":"set_shard_read_only","level":"warning","msg":"Set READONLY, disk usage currently at 94.86%, threshold set to 90.00%","path":"/var/lib/weaviate","time":"2025-03-11T08:41:59Z"}

terrysun1216 avatar Mar 11 '25 08:03 terrysun1216

Seems you are running out of disk space.

crazywoola avatar Mar 11 '25 08:03 crazywoola

I will try to migrate all the data from this disk to a bigger one and see if it works without further adjustments.
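
For reference, a minimal migration sketch, assuming the stock docker-compose layout where all stateful data sits under the volumes directory next to docker-compose.yaml (the /mnt/bigdisk mount point is hypothetical; adjust paths to your setup):

# Stop the stack so nothing writes during the copy
docker compose down

# Copy all stateful data to the bigger disk, preserving permissions
sudo rsync -a ./volumes/ /mnt/bigdisk/dify-volumes/

# Point the old path at the new location (alternatively, edit the
# bind-mount paths in docker-compose.yaml)
mv ./volumes ./volumes.bak
sudo ln -s /mnt/bigdisk/dify-volumes ./volumes

# Bring the stack back up
docker compose up -d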

terrysun1216 avatar Mar 11 '25 09:03 terrysun1216