
Issue with Setting Up Admin Account: Long Loading Time without Success

Open goldeneave opened this issue 1 year ago • 9 comments

Self Checks

  • [X] This is only for bug report, if you would like to ask a question, please head to Discussions.
  • [X] I have searched for existing issues, including closed ones.
  • [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [X] Please do not modify this template :) and fill in all the required fields.

Dify version

0.6.8

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

I encountered an issue during the setup process of Dify: I am unable to create an admin account. After clicking the "Set Up Admin Account" button, the web page remains in a loading state indefinitely and never completes the process.

✔️ Expected Behavior

After filling out the admin account details and clicking the setup button, I expect the system to create the account and proceed to the next configuration step or dashboard.

❌ Actual Behavior

The page continuously loads and does not proceed beyond the "Set Up Admin Account" stage. There is no error message displayed, and the process does not complete even after an extended period.

goldeneave avatar May 21 '24 10:05 goldeneave

To resolve the issue of the "Set Up Admin Account" page continuously loading and not proceeding in Dify version 0.6.8 when self-hosted using Docker, follow these steps:

  1. Check Container Status: Ensure that all the required containers are running successfully. Use the following command to check the status of the containers:

    docker compose ps
    

    Make sure that all the containers, including api, worker, web, weaviate, db, redis, and nginx, are up and running. If any container is not running, you may need to troubleshoot the specific container.
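
    If a container is shown as Exited or keeps restarting, its own logs are usually the fastest way to see why. For example (assuming the stack was started from the dify/docker directory, as in step 2):

    docker compose logs --tail 100 api
    docker compose logs --tail 100 db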

  2. Restart Containers: If all containers are running but the issue persists, try restarting the containers:

    cd dify/docker
    docker compose down
    docker compose up -d
    
  3. Verify Environment Variables: Make sure the environment variables in your docker-compose.yaml file are set correctly, especially INIT_PASSWORD and MIGRATION_ENABLED. MIGRATION_ENABLED should be set to 'true' so that database migrations run before the application starts:

    api:
      environment:
        INIT_PASSWORD: 'your_admin_password'
        MIGRATION_ENABLED: 'true'
    
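    Note that editing docker-compose.yaml does not affect an already running container; the api service has to be recreated for the new values to take effect, for example:

    cd dify/docker
    docker compose up -d api
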
  4. Check for Specific Errors: Check the api and worker logs for any of the following errors (a quick way to search for them is shown after this list):

    • AlreadySetupError: Indicates that Dify has already been successfully installed.
    • NotSetupError: Indicates that Dify has not been initialized and installed yet.
    • NotInitValidateError: Indicates that init validation has not been completed yet.
    • InitValidateFailedError: Indicates that init validation failed.
    • AccountNotLinkTenantError: Indicates that the account is not linked to a tenant.
    • AlreadyActivateError: Indicates that the auth token is invalid or the account is already activated.
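
    As a rough sketch, you can search the api and worker logs for these names directly (this assumes the error names appear verbatim in the log output; they may also surface as snake_case error codes in the API's JSON responses):

    docker compose logs api worker | grep -Ei "AlreadySetup|NotSetup|NotInitValidate|InitValidateFailed|AccountNotLinkTenant|AlreadyActivate"
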
  5. Frontend Code Verification: Ensure that the frontend code responsible for the setup process is functioning correctly. The InstallForm component in installForm.tsx handles the setup flow and redirects based on the setup status. Check that fetchSetupStatus and fetchInitValidateStatus return the expected results and that the form submission is handled correctly; you can also query the setup status directly, as in the sketch below.
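
    A minimal way to see what the backend itself reports, bypassing the frontend entirely (the /console/api/setup path is assumed from the route the setup form calls; adjust the host and port to wherever nginx is published):

    curl -i http://localhost/console/api/setup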

  6. CORS Issues: If the page keeps loading and the network requests show a CORS error, the domain name/URL may have been switched, causing cross-origin issues between the frontend and the backend. Change all of the following configuration items in docker-compose.yml to the new domain name (see the sketch after this list):

    • CONSOLE_API_URL
    • CONSOLE_WEB_URL
    • SERVICE_API_URL
    • APP_API_URL
    • APP_WEB_URL
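
    For example, if the stack were served at https://dify.example.com (a placeholder domain), the api service environment would look roughly like this, with matching entries under the web service if it defines the same variables:

    api:
      environment:
        CONSOLE_API_URL: 'https://dify.example.com'
        CONSOLE_WEB_URL: 'https://dify.example.com'
        SERVICE_API_URL: 'https://dify.example.com'
        APP_API_URL: 'https://dify.example.com'
        APP_WEB_URL: 'https://dify.example.com'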

By following these steps, you should be able to resolve the issue and proceed with the admin account setup. If you encounter any specific errors or issues during these steps, please provide more details so I can assist you further.


dosubot[bot] avatar May 21 '24 10:05 dosubot[bot]

Please provide some container logs so that we can help.

crazywoola avatar May 21 '24 10:05 crazywoola

I have attached the logs below as requested. Please let me know if you need any more information or further clarification.

pm2 launched in no-daemon mode (you can add DEBUG="*" env variable to get more messages)
2024-05-21T11:45:55: PM2 log: Launching in no daemon mode
2024-05-21T11:45:55: PM2 log: [PM2][WARN] Applications dify-web not running, starting...
2024-05-21T11:45:55: PM2 log: App [dify-web:0] starting in -cluster mode-
2024-05-21T11:45:55: PM2 log: App [dify-web:0] online
2024-05-21T11:45:55: PM2 log: App [dify-web:1] starting in -cluster mode-
2024-05-21T11:45:55: PM2 log: App [dify-web:1] online
2024-05-21T11:45:55: PM2 log: [PM2] App [dify-web] launched (2 instances)
2024-05-21T11:45:55: PM2 log: ┌────┬─────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name        │ namespace   │ version │ mode    │ pid      │ uptime │ ↺    │ status    │ cpu      │ mem      │ user     │ watching │
├────┼─────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0  │ dify-web    │ default     │ 0.6.8   │ cluster │ 18       │ 0s     │ 0    │ online    │ 0%       │ 50.6mb   │ root     │ disabled │
│ 1  │ dify-web    │ default     │ 0.6.8   │ cluster │ 25       │ 0s     │ 0    │ online    │ 0%       │ 46.6mb   │ root     │ disabled │
└────┴─────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
2024-05-21T11:45:55: PM2 log: [--no-daemon] Continue to stream logs
2024-05-21T11:45:55: PM2 log: [--no-daemon] Exit on target PM2 exit pid=7
11:45:55 1|dify-web  |    ▲ Next.js 14.1.0
11:45:55 1|dify-web  |    - Local:        http://d936d3fa8f1b:3000
11:45:55 1|dify-web  |    - Network:      http://192.168.80.5:3000
11:45:55 0|dify-web  |    ▲ Next.js 14.1.0
11:45:55 0|dify-web  |    - Local:        http://d936d3fa8f1b:3000
11:45:55 0|dify-web  |    - Network:      http://192.168.80.5:3000
11:45:55 0|dify-web  |  ✓ Ready in 51ms
11:45:55 1|dify-web  |  ✓ Ready in 68ms
(base) user@user-System-Product-Name:~/documents/dify/docker$ docker-compose logs
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json?all=1&filters=%7B%22label%22%3A%7B%22com.docker.compose.config-hash%22%3Atrue%2C%22com.docker.compose.oneoff%3DFalse%22%3Atrue%2C%22com.docker.compose.project%3Ddocker%22%3Atrue%7D%7D": dial unix /var/run/docker.sock: connect: permission denied
(base) user@user-System-Product-Name:~/documents/dify/docker$ sudo docker-compose logs
docker-sandbox-1  | 2024/05/21 11:45:54 nodejs.go:32: [INFO]initializing nodejs runner environment...
docker-sandbox-1  | 2024/05/21 11:45:54 nodejs.go:91: [INFO]nodejs runner environment initialized
docker-db-1       | 
docker-api-1      | Running migrations
docker-api-1      | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
docker-api-1      | INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
docker-db-1       | PostgreSQL Database directory appears to contain a database; Skipping initialization
docker-api-1      | INFO  [alembic.runtime.migration] Will assume transactional DDL.
docker-db-1       | 
docker-sandbox-1  | 2024/05/21 11:45:54 setup.go:22: [INFO]initializing python runner environment...
docker-redis-1    | 1:C 21 May 2024 11:45:54.754 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
docker-sandbox-1  | 2024/05/21 11:45:54 setup.go:35: [INFO]python runner environment initialized
docker-redis-1    | 1:C 21 May 2024 11:45:54.754 # Redis version=6.2.14, bits=64, commit=00000000, modified=0, pid=1, just started
docker-worker-1   | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
docker-worker-1   | /usr/local/lib/python3.10/site-packages/celery/platforms.py:829: SecurityWarning: You're running the worker with superuser privileges: this is
docker-sandbox-1  | 2024/05/21 11:45:54 config.go:86: [INFO]network has been enabled
docker-db-1       | 2024-05-21 11:45:54.899 UTC [1] LOG:  starting PostgreSQL 15.7 on x86_64-pc-linux-musl, compiled by gcc (Alpine 13.2.1_git20231014) 13.2.1 20231014, 64-bit
docker-worker-1    | absolutely not recommended!
docker-worker-1    | 
docker-worker-1    | Please specify a different user using the --uid option.
docker-worker-1    | 
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| WARNING: BCP 177 violation. Detected non-functional IPv6 loopback.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-db-1        | 2024-05-21 11:45:54.899 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
docker-nginx-1    | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
docker-nginx-1       | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
docker-nginx-1       | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
docker-nginx-1       | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
docker-nginx-1       | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
docker-sandbox-1   | 2024/05/21 11:45:54 config.go:102: [INFO]using https proxy: http://ssrf_proxy:3128
docker-nginx-1       | /docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
docker-web-1      | 
docker-web-1         |                         -------------
docker-api-1      | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
docker-worker-1      | User information: uid=0 euid=0 gid=0 egid=0
docker-worker-1      | 
docker-worker-1      |   warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
docker-worker-1      |  
docker-worker-1      |  -------------- celery@b08b79fc92bf v5.3.6 (emerald-rush)
docker-worker-1      | --- ***** ----- 
docker-worker-1      | -- ******* ---- Linux-5.4.0-152-generic-x86_64-with-glibc2.36 2024-05-21 11:46:00
docker-db-1          | 2024-05-21 11:45:54.899 UTC [1] LOG:  listening on IPv6 address "::", port 5432
docker-web-1         | 
docker-redis-1    | 1:C 21 May 2024 11:45:54.754 # Configuration loaded
docker-redis-1       | 1:M 21 May 2024 11:45:54.754 * monotonic clock: POSIX clock_gettime
docker-redis-1       | 1:M 21 May 2024 11:45:54.755 * Running mode=standalone, port=6379.
docker-redis-1       | 1:M 21 May 2024 11:45:54.755 # Server initialized
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Processing Configuration File: /etc/squid/squid.conf (depth 0)
docker-db-1          | 2024-05-21 11:45:54.900 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-weaviate-1  | {"action":"startup","default_vectorizer_module":"none","level":"info","msg":"the default vectorizer modules is set to \"none\", as a result all new schema classes without an explicit vectorizer setting, will use this vectorizer","time":"2024-05-21T11:45:54Z"}
docker-weaviate-1    | {"action":"startup","auto_schema_enabled":true,"level":"info","msg":"auto schema enabled setting is set to \"true\"","time":"2024-05-21T11:45:54Z"}
docker-weaviate-1    | {"action":"grpc_startup","level":"info","msg":"grpc server listening at [::]:50051","time":"2024-05-21T11:45:54Z"}
docker-weaviate-1    | {"action":"restapi_management","level":"info","msg":"Serving weaviate at http://[::]:8080","time":"2024-05-21T11:45:54Z"}
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-web-1         | __/\\\\\\\\\\\\\____/\\\\____________/\\\\____/\\\\\\\\\_____
docker-web-1         |  _\/\\\/////////\\\_\/\\\\\\________/\\\\\\__/\\\///////\\\___
docker-web-1         |   _\/\\\_______\/\\\_\/\\\//\\\____/\\\//\\\_\///______\//\\\__
docker-api-1         | [2024-05-21 11:46:04 +0000] [58] [INFO] Starting gunicorn 22.0.0
docker-api-1         | [2024-05-21 11:46:04 +0000] [58] [INFO] Listening at: http://0.0.0.0:5001 (58)
docker-api-1         | [2024-05-21 11:46:04 +0000] [58] [INFO] Using worker: gevent
docker-api-1         | [2024-05-21 11:46:04 +0000] [109] [INFO] Booting worker with pid: 109
docker-db-1          | 2024-05-21 11:45:54.902 UTC [24] LOG:  database system was shut down at 2024-05-21 11:45:46 UTC
docker-nginx-1       | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
docker-db-1          | 2024-05-21 11:45:54.904 UTC [1] LOG:  database system is ready to accept connections
docker-redis-1       | 1:M 21 May 2024 11:45:54.755 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
docker-redis-1       | 1:M 21 May 2024 11:45:54.755 * Loading RDB produced by version 6.2.14
docker-redis-1       | 1:M 21 May 2024 11:45:54.755 * RDB age 8 seconds
docker-redis-1       | 1:M 21 May 2024 11:45:54.755 * RDB memory usage when created 0.78 Mb
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| WARNING: (B) '::/0' is a subnetwork of (A) '::/0'
docker-sandbox-1     | 2024/05/21 11:45:54 config.go:111: [INFO]using http proxy: http://ssrf_proxy:3128
docker-sandbox-1     | 2024/05/21 11:45:54 server.go:19: [INFO]config init success
docker-sandbox-1     | 2024/05/21 11:45:54 server.go:25: [INFO]runner dependencies init success
docker-sandbox-1     | 2024/05/21 11:45:54 server.go:42: [INFO]installing python dependencies...
docker-sandbox-1     | 2024/05/21 11:45:54 server.go:48: [INFO]python dependencies installed
docker-sandbox-1     | 2024/05/21 11:45:54 cocrrent.go:31: [INFO]setting max requests to 50
docker-nginx-1       | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
docker-web-1         |    _\/\\\\\\\\\\\\\/__\/\\\\///\\\/\\\/_\/\\\___________/\\\/___
docker-nginx-1       | /docker-entrypoint.sh: Configuration complete; ready for start up
docker-redis-1       | 1:M 21 May 2024 11:45:54.755 # Done loading RDB, keys loaded: 5, keys expired: 0.
docker-worker-1      | - *** --- * --- 
docker-redis-1       | 1:M 21 May 2024 11:45:54.755 * DB loaded from disk: 0.000 seconds
docker-web-1         |     _\/\\\/////////____\/\\\__\///\\\/___\/\\\________/\\\//_____
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| WARNING: because of this '::/0' is ignored to keep splay tree searching predictable
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| WARNING: You should probably remove '::/0' from the ACL named 'all'
docker-worker-1      | - ** ---------- [config]
docker-worker-1      | - ** ---------- .> app:         app:0x7fd12c782920
docker-worker-1      | - ** ---------- .> transport:   redis://:**@redis:6379/1
docker-worker-1      | - ** ---------- .> results:     postgresql://postgres:**@db:5432/dify
docker-web-1         |      _\/\\\_____________\/\\\____\///_____\/\\\_____/\\\//________
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: using the "epoll" event method
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: nginx/1.25.5
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14) 
docker-worker-1      | - *** --- * --- .> concurrency: 1 (gevent)
docker-sandbox-1     | 2024/05/21 11:45:54 cocrrent.go:13: [INFO]setting max workers to 4
docker-sandbox-1     | [GIN] 2024/05/21 - 11:46:33 | 401 |       4.862µs |              :: | GET      "/squid-internal-dynamic/netdb"
docker-worker-1      | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
docker-worker-1      | --- ***** ----- 
docker-worker-1      |  -------------- [queues]
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: OS: Linux 5.4.0-152-generic
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Created PID file (/run/squid.pid)
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Set Current Directory to /var/spool/squid
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Creating missing swap directories
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| No cache_dir stores are configured.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Removing PID file (/run/squid.pid)
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| WARNING: BCP 177 violation. Detected non-functional IPv6 loopback.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-redis-1       | 1:M 21 May 2024 11:45:54.755 * Ready to accept connections
docker-worker-1      |                 .> dataset          exchange=dataset(direct) key=dataset
docker-worker-1      |                 .> generation       exchange=generation(direct) key=generation
docker-worker-1      |                 .> mail             exchange=mail(direct) key=mail
docker-worker-1      | 
docker-worker-1      | [tasks]
docker-worker-1      |   . schedule.clean_embedding_cache_task.clean_embedding_cache_task
docker-worker-1      |   . schedule.clean_unused_datasets_task.clean_unused_datasets_task
docker-worker-1      |   . tasks.add_document_to_index_task.add_document_to_index_task
docker-worker-1      |   . tasks.annotation.add_annotation_to_index_task.add_annotation_to_index_task
docker-worker-1      |   . tasks.annotation.batch_import_annotations_task.batch_import_annotations_task
docker-worker-1      |   . tasks.annotation.delete_annotation_index_task.delete_annotation_index_task
docker-worker-1      |   . tasks.annotation.disable_annotation_reply_task.disable_annotation_reply_task
docker-web-1         |       _\/\\\_____________\/\\\_____________\/\\\___/\\\/___________
docker-web-1         |        _\/\\\_____________\/\\\_____________\/\\\__/\\\\\\\\\\\\\\\_
docker-web-1         |         _\///______________\///______________\///__\///////////////__
docker-web-1         | 
docker-web-1         | 
docker-web-1         |                           Runtime Edition
docker-web-1         | 
docker-web-1         |         PM2 is a Production Process Manager for Node.js applications
docker-web-1         |                      with a built-in Load Balancer.
docker-web-1         | 
docker-web-1         |                 Start and Daemonize any application:
docker-web-1         |                 $ pm2 start app.js
docker-web-1         | 
docker-web-1         |                 Load Balance 4 instances of api.js:
docker-web-1         |                 $ pm2 start api.js -i 4
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker processes
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 28
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 29
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 30
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 31
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 32
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 33
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 34
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 35
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 36
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 37
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 38
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 39
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 40
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 41
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 42
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 43
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 44
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 45
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 46
docker-nginx-1       | 2024/05/21 11:45:55 [notice] 1#1: start worker process 47
docker-worker-1      |   . tasks.annotation.enable_annotation_reply_task.enable_annotation_reply_task
docker-worker-1      |   . tasks.annotation.update_annotation_to_index_task.update_annotation_to_index_task
docker-worker-1      |   . tasks.batch_create_segment_to_index_task.batch_create_segment_to_index_task
docker-worker-1      |   . tasks.clean_dataset_task.clean_dataset_task
docker-worker-1      |   . tasks.clean_document_task.clean_document_task
docker-worker-1      |   . tasks.clean_notion_document_task.clean_notion_document_task
docker-worker-1      |   . tasks.deal_dataset_vector_index_task.deal_dataset_vector_index_task
docker-worker-1      |   . tasks.delete_segment_from_index_task.delete_segment_from_index_task
docker-worker-1      |   . tasks.disable_segment_from_index_task.disable_segment_from_index_task
docker-worker-1      |   . tasks.document_indexing_sync_task.document_indexing_sync_task
docker-worker-1      |   . tasks.document_indexing_task.document_indexing_task
docker-worker-1      |   . tasks.document_indexing_update_task.document_indexing_update_task
docker-worker-1      |   . tasks.duplicate_document_indexing_task.duplicate_document_indexing_task
docker-worker-1      |   . tasks.enable_segment_to_index_task.enable_segment_to_index_task
docker-worker-1      |   . tasks.mail_invite_member_task.send_invite_member_mail_task
docker-worker-1      |   . tasks.recover_document_indexing_task.recover_document_indexing_task
docker-worker-1      |   . tasks.remove_document_from_index_task.remove_document_from_index_task
docker-worker-1      |   . tasks.retry_document_indexing_task.retry_document_indexing_task
docker-worker-1      | 
docker-worker-1      | [2024-05-21 11:46:00,386: INFO/MainProcess] Connected to redis://:**@redis:6379/1
docker-worker-1      | [2024-05-21 11:46:00,387: INFO/MainProcess] mingle: searching for neighbors
docker-worker-1      | [2024-05-21 11:46:01,392: INFO/MainProcess] mingle: all alone
docker-worker-1      | [2024-05-21 11:46:01,402: INFO/MainProcess] celery@b08b79fc92bf ready.
docker-worker-1      | [2024-05-21 11:46:01,403: INFO/MainProcess] pidbox: Connected to redis://:**@redis:6379/1.
docker-web-1         | 
docker-web-1         |                 Monitor in production:
docker-web-1         |                 $ pm2 monitor
docker-web-1         | 
docker-web-1         |                 Make pm2 auto-boot at server restart:
docker-web-1         |                 $ pm2 startup
docker-web-1         | 
docker-web-1         |                 To go further checkout:
docker-web-1         |                 http://pm2.io/
docker-web-1         | 
docker-web-1         | 
docker-web-1         |                         -------------
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Processing Configuration File: /etc/squid/squid.conf (depth 0)
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| WARNING: (B) '::/0' is a subnetwork of (A) '::/0'
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| WARNING: because of this '::/0' is ignored to keep splay tree searching predictable
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| WARNING: You should probably remove '::/0' from the ACL named 'all'
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Created PID file (/run/squid.pid)
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Set Current Directory to /var/spool/squid
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Creating missing swap directories
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| No cache_dir stores are configured.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Removing PID file (/run/squid.pid)
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| WARNING: BCP 177 violation. Detected non-functional IPv6 loopback.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Processing Configuration File: /etc/squid/squid.conf (depth 0)
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| WARNING: (B) '::/0' is a subnetwork of (A) '::/0'
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| WARNING: because of this '::/0' is ignored to keep splay tree searching predictable
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| WARNING: You should probably remove '::/0' from the ACL named 'all'
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Created PID file (/run/squid.pid)
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Set Current Directory to /var/spool/squid
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Starting Squid Cache version 6.1 for x86_64-pc-linux-gnu...
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Service Name: squid
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Process ID 40
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Process Roles: master worker
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| With 1048576 file descriptors available
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Initializing IP Cache...
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| DNS IPv4 socket created at 0.0.0.0, FD 8
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Adding nameserver 127.0.0.11 from /etc/resolv.conf
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Adding ndots 1 from /etc/resolv.conf
docker-web-1         | 
docker-web-1         | pm2 launched in no-daemon mode (you can add DEBUG="*" env variable to get more messages)
docker-web-1         | 2024-05-21T11:45:55: PM2 log: Launching in no daemon mode
docker-web-1         | 2024-05-21T11:45:55: PM2 log: [PM2][WARN] Applications dify-web not running, starting...
docker-web-1         | 2024-05-21T11:45:55: PM2 log: App [dify-web:0] starting in -cluster mode-
docker-web-1         | 2024-05-21T11:45:55: PM2 log: App [dify-web:0] online
docker-web-1         | 2024-05-21T11:45:55: PM2 log: App [dify-web:1] starting in -cluster mode-
docker-web-1         | 2024-05-21T11:45:55: PM2 log: App [dify-web:1] online
docker-web-1         | 2024-05-21T11:45:55: PM2 log: [PM2] App [dify-web] launched (2 instances)
docker-web-1         | 2024-05-21T11:45:55: PM2 log: ┌────┬─────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
docker-web-1         | │ id │ name        │ namespace   │ version │ mode    │ pid      │ uptime │ ↺    │ status    │ cpu      │ mem      │ user     │ watching │
docker-web-1         | ├────┼─────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
docker-web-1         | │ 0  │ dify-web    │ default     │ 0.6.8   │ cluster │ 18       │ 0s     │ 0    │ online    │ 0%       │ 50.6mb   │ root     │ disabled │
docker-web-1         | │ 1  │ dify-web    │ default     │ 0.6.8   │ cluster │ 25       │ 0s     │ 0    │ online    │ 0%       │ 46.6mb   │ root     │ disabled │
docker-web-1         | └────┴─────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
docker-web-1         | 2024-05-21T11:45:55: PM2 log: [--no-daemon] Continue to stream logs
docker-web-1         | 2024-05-21T11:45:55: PM2 log: [--no-daemon] Exit on target PM2 exit pid=7
docker-web-1         | 11:45:55 1|dify-web  |    ▲ Next.js 14.1.0
docker-web-1         | 11:45:55 1|dify-web  |    - Local:        http://d936d3fa8f1b:3000
docker-web-1         | 11:45:55 1|dify-web  |    - Network:      http://192.168.80.5:3000
docker-web-1         | 11:45:55 0|dify-web  |    ▲ Next.js 14.1.0
docker-web-1         | 11:45:55 0|dify-web  |    - Local:        http://d936d3fa8f1b:3000
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Logfile: opening log daemon:/var/log/squid/access.log
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Logfile Daemon: opening log /var/log/squid/access.log
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Store logging disabled
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Swap maxSize 0 + 262144 KB, estimated 20164 objects
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Target number of buckets: 1008
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Using 8192 Store buckets
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Max Mem  size: 262144 KB
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Max Swap size: 0 KB
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Using Least Load store dir selection
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Set Current Directory to /var/spool/squid
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Finished loading MIME types and icons.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| HTCP Disabled.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Pinger socket opened on FD 14
docker-web-1         | 11:45:55 0|dify-web  |    - Network:      http://192.168.80.5:3000
docker-web-1         | 11:45:55 0|dify-web  |  ✓ Ready in 51ms
docker-web-1         | 11:45:55 1|dify-web  |  ✓ Ready in 68ms
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Squid plugin modules loaded: 0
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Adaptation support is off.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Accepting HTTP Socket connections at conn2 local=0.0.0.0:3128 remote=[::] FD 11 flags=9
docker-ssrf_proxy-1  |     listening port: 3128
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Accepting reverse-proxy HTTP Socket connections at conn4 local=0.0.0.0:8194 remote=[::] FD 12 flags=9
docker-ssrf_proxy-1  |     listening port: 8194
docker-ssrf_proxy-1  | 2024/05/21 11:45:55| Configuring Parent sandbox
docker-ssrf_proxy-1  | 2024/05/21 11:45:55 pinger| WARNING: BCP 177 violation. Detected non-functional IPv6 loopback.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55 pinger| Initialising ICMP pinger ...
docker-ssrf_proxy-1  | 2024/05/21 11:45:55 pinger| ICMP socket opened.
docker-ssrf_proxy-1  | 2024/05/21 11:45:55 pinger| ICMPv6 socket opened
docker-ssrf_proxy-1  | 2024/05/21 11:45:56| storeLateRelease: released 0 objects

goldeneave avatar May 21 '24 11:05 goldeneave

PostgreSQL Database directory appears to contain a database; Skipping initialization

It seems you have already set up an admin account? Is this a fresh install?

crazywoola avatar May 21 '24 15:05 crazywoola


Yes, it is a fresh install, and I set it in the yaml file manually. I will delete the Docker containers and try to restart. If it still doesn't work, I will attach the new logs here.

goldeneave avatar May 21 '24 21:05 goldeneave

I deleted the Docker containers and restarted the web service, but it still remains stuck loading. The new logs are attached here:

permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json?all=1&filters=%7B%22label%22%3A%7B%22com.docker.compose.config-hash%22%3Atrue%2C%22com.docker.compose.oneoff%3DFalse%22%3Atrue%2C%22com.docker.compose.project%3Ddocker%22%3Atrue%7D%7D": dial unix /var/run/docker.sock: connect: permission denied
(base) user@user-System-Product-Name:~/documents/dify/docker$ sudo docker-compose logs
docker-db-1  | 
docker-db-1  | PostgreSQL Database directory appears to contain a database; Skipping initialization
docker-db-1     | 
docker-redis-1  | 1:C 22 May 2024 06:32:19.695 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
docker-redis-1  | 1:C 22 May 2024 06:32:19.695 # Redis version=6.2.14, bits=64, commit=00000000, modified=0, pid=1, just started
docker-redis-1  | 1:C 22 May 2024 06:32:19.695 # Configuration loaded
docker-db-1     | 2024-05-22 06:32:19.586 UTC [1] LOG:  starting PostgreSQL 15.7 on x86_64-pc-linux-musl, compiled by gcc (Alpine 13.2.1_git20231014) 13.2.1 20231014, 64-bit
docker-db-1     | 2024-05-22 06:32:19.586 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
docker-db-1     | 2024-05-22 06:32:19.586 UTC [1] LOG:  listening on IPv6 address "::", port 5432
docker-worker-1  | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
docker-redis-1  | 1:M 22 May 2024 06:32:19.696 * monotonic clock: POSIX clock_gettime
docker-redis-1   | 1:M 22 May 2024 06:32:19.697 * Running mode=standalone, port=6379.
docker-api-1    | Running migrations
docker-api-1     | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
docker-api-1     | INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
docker-redis-1   | 1:M 22 May 2024 06:32:19.697 # Server initialized
docker-db-1      | 2024-05-22 06:32:19.587 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
docker-redis-1   | 1:M 22 May 2024 06:32:19.697 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
docker-redis-1   | 1:M 22 May 2024 06:32:19.698 * Loading RDB produced by version 6.2.14
docker-redis-1   | 1:M 22 May 2024 06:32:19.698 * RDB age 36 seconds
docker-redis-1   | 1:M 22 May 2024 06:32:19.698 * RDB memory usage when created 0.78 Mb
docker-redis-1   | 1:M 22 May 2024 06:32:19.698 # Done loading RDB, keys loaded: 5, keys expired: 0.
docker-web-1     | 
docker-weaviate-1    | {"action":"startup","default_vectorizer_module":"none","level":"info","msg":"the default vectorizer modules is set to \"none\", as a result all new schema classes without an explicit vectorizer setting, will use this vectorizer","time":"2024-05-22T06:32:19Z"}
docker-weaviate-1    | {"action":"startup","auto_schema_enabled":true,"level":"info","msg":"auto schema enabled setting is set to \"true\"","time":"2024-05-22T06:32:19Z"}
docker-redis-1   | 1:M 22 May 2024 06:32:19.698 * DB loaded from disk: 0.000 seconds
docker-redis-1       | 1:M 22 May 2024 06:32:19.698 * Ready to accept connections
docker-api-1     | INFO  [alembic.runtime.migration] Will assume transactional DDL.
docker-weaviate-1    | {"action":"grpc_startup","level":"info","msg":"grpc server listening at [::]:50051","time":"2024-05-22T06:32:19Z"}
docker-api-1         | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
docker-db-1      | 2024-05-22 06:32:19.589 UTC [24] LOG:  database system was shut down at 2024-05-22 06:31:43 UTC
docker-db-1          | 2024-05-22 06:32:19.592 UTC [1] LOG:  database system is ready to accept connections
docker-nginx-1   | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
docker-nginx-1       | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
docker-nginx-1       | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
docker-worker-1  | /usr/local/lib/python3.10/site-packages/celery/platforms.py:829: SecurityWarning: You're running the worker with superuser privileges: this is
docker-web-1       |                         -------------
docker-worker-1      | absolutely not recommended!
docker-web-1         | 
docker-web-1         | __/\\\\\\\\\\\\\____/\\\\____________/\\\\____/\\\\\\\\\_____
docker-web-1         |  _\/\\\/////////\\\_\/\\\\\\________/\\\\\\__/\\\///////\\\___
docker-api-1         | [2024-05-22 06:32:30 +0000] [58] [INFO] Starting gunicorn 22.0.0
docker-api-1         | [2024-05-22 06:32:30 +0000] [58] [INFO] Listening at: http://0.0.0.0:5001 (58)
docker-api-1         | [2024-05-22 06:32:30 +0000] [58] [INFO] Using worker: gevent
docker-api-1         | [2024-05-22 06:32:30 +0000] [109] [INFO] Booting worker with pid: 109
docker-nginx-1       | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
docker-weaviate-1    | {"action":"restapi_management","level":"info","msg":"Serving weaviate at http://[::]:8080","time":"2024-05-22T06:32:19Z"}
docker-sandbox-1     | 2024/05/22 06:32:19 nodejs.go:32: [INFO]initializing nodejs runner environment...
docker-sandbox-1     | 2024/05/22 06:32:19 nodejs.go:91: [INFO]nodejs runner environment initialized
docker-web-1         |   _\/\\\_______\/\\\_\/\\\//\\\____/\\\//\\\_\///______\//\\\__
docker-web-1         |    _\/\\\\\\\\\\\\\/__\/\\\\///\\\/\\\/_\/\\\___________/\\\/___
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| WARNING: BCP 177 violation. Detected non-functional IPv6 loopback.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-nginx-1       | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
docker-nginx-1       | /docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
docker-sandbox-1     | 2024/05/22 06:32:19 setup.go:22: [INFO]initializing python runner environment...
docker-sandbox-1     | 2024/05/22 06:32:19 setup.go:35: [INFO]python runner environment initialized
docker-sandbox-1     | 2024/05/22 06:32:19 config.go:86: [INFO]network has been enabled
docker-web-1         |     _\/\\\/////////____\/\\\__\///\\\/___\/\\\________/\\\//_____
docker-worker-1      | 
docker-worker-1      | Please specify a different user using the --uid option.
docker-nginx-1       | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
docker-nginx-1       | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-worker-1      | 
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Processing Configuration File: /etc/squid/squid.conf (depth 0)
docker-web-1         |      _\/\\\_____________\/\\\____\///_____\/\\\_____/\\\//________
docker-web-1         |       _\/\\\_____________\/\\\_____________\/\\\___/\\\/___________
docker-web-1         |        _\/\\\_____________\/\\\_____________\/\\\__/\\\\\\\\\\\\\\\_
docker-web-1         |         _\///______________\///______________\///__\///////////////__
docker-web-1         | 
docker-worker-1      | User information: uid=0 euid=0 gid=0 egid=0
docker-nginx-1       | /docker-entrypoint.sh: Configuration complete; ready for start up
docker-worker-1      | 
docker-worker-1      |   warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
docker-worker-1      |  
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| WARNING: (B) '::/0' is a subnetwork of (A) '::/0'
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| WARNING: because of this '::/0' is ignored to keep splay tree searching predictable
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| WARNING: You should probably remove '::/0' from the ACL named 'all'
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: using the "epoll" event method
docker-web-1         | 
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: nginx/1.25.5
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14) 
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: OS: Linux 5.4.0-152-generic
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
docker-sandbox-1     | 2024/05/22 06:32:19 config.go:102: [INFO]using https proxy: http://ssrf_proxy:3128
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Created PID file (/run/squid.pid)
docker-sandbox-1     | 2024/05/22 06:32:19 config.go:111: [INFO]using http proxy: http://ssrf_proxy:3128
docker-sandbox-1     | 2024/05/22 06:32:19 server.go:19: [INFO]config init success
docker-sandbox-1     | 2024/05/22 06:32:19 server.go:25: [INFO]runner dependencies init success
docker-sandbox-1     | 2024/05/22 06:32:19 server.go:42: [INFO]installing python dependencies...
docker-sandbox-1     | 2024/05/22 06:32:19 server.go:48: [INFO]python dependencies installed
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Set Current Directory to /var/spool/squid
docker-sandbox-1     | 2024/05/22 06:32:19 cocrrent.go:31: [INFO]setting max requests to 50
docker-worker-1      |  -------------- celery@0d4cbcfe6432 v5.3.6 (emerald-rush)
docker-sandbox-1     | 2024/05/22 06:32:19 cocrrent.go:13: [INFO]setting max workers to 4
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Creating missing swap directories
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| No cache_dir stores are configured.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Removing PID file (/run/squid.pid)
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| WARNING: BCP 177 violation. Detected non-functional IPv6 loopback.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-web-1         |                           Runtime Edition
docker-web-1         | 
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Processing Configuration File: /etc/squid/squid.conf (depth 0)
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| WARNING: (B) '::/0' is a subnetwork of (A) '::/0'
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| WARNING: because of this '::/0' is ignored to keep splay tree searching predictable
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| WARNING: You should probably remove '::/0' from the ACL named 'all'
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Created PID file (/run/squid.pid)
docker-worker-1      | --- ***** ----- 
docker-web-1         |         PM2 is a Production Process Manager for Node.js applications
docker-worker-1      | -- ******* ---- Linux-5.4.0-152-generic-x86_64-with-glibc2.36 2024-05-22 06:32:25
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker processes
docker-worker-1      | - *** --- * --- 
docker-worker-1      | - ** ---------- [config]
docker-worker-1      | - ** ---------- .> app:         app:0x7fa03393d060
docker-worker-1      | - ** ---------- .> transport:   redis://:**@redis:6379/1
docker-worker-1      | - ** ---------- .> results:     postgresql://postgres:**@db:5432/dify
docker-sandbox-1     | [GIN] 2024/05/22 - 06:32:49 | 401 |       4.233µs |              :: | GET      "/squid-internal-dynamic/netdb"
docker-worker-1      | - *** --- * --- .> concurrency: 1 (gevent)
docker-worker-1      | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
docker-worker-1      | --- ***** ----- 
docker-worker-1      |  -------------- [queues]
docker-worker-1      |                 .> dataset          exchange=dataset(direct) key=dataset
docker-worker-1      |                 .> generation       exchange=generation(direct) key=generation
docker-web-1         |                      with a built-in Load Balancer.
docker-web-1         | 
docker-web-1         |                 Start and Daemonize any application:
docker-web-1         |                 $ pm2 start app.js
docker-web-1         | 
docker-web-1         |                 Load Balance 4 instances of api.js:
docker-web-1         |                 $ pm2 start api.js -i 4
docker-web-1         | 
docker-worker-1      |                 .> mail             exchange=mail(direct) key=mail
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 28
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 29
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 30
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 31
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 32
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 33
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 34
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Set Current Directory to /var/spool/squid
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Creating missing swap directories
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| No cache_dir stores are configured.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Removing PID file (/run/squid.pid)
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| WARNING: BCP 177 violation. Detected non-functional IPv6 loopback.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Processing Configuration File: /etc/squid/squid.conf (depth 0)
docker-worker-1      | 
docker-worker-1      | [tasks]
docker-worker-1      |   . schedule.clean_embedding_cache_task.clean_embedding_cache_task
docker-worker-1      |   . schedule.clean_unused_datasets_task.clean_unused_datasets_task
docker-worker-1      |   . tasks.add_document_to_index_task.add_document_to_index_task
docker-worker-1      |   . tasks.annotation.add_annotation_to_index_task.add_annotation_to_index_task
docker-worker-1      |   . tasks.annotation.batch_import_annotations_task.batch_import_annotations_task
docker-worker-1      |   . tasks.annotation.delete_annotation_index_task.delete_annotation_index_task
docker-worker-1      |   . tasks.annotation.disable_annotation_reply_task.disable_annotation_reply_task
docker-worker-1      |   . tasks.annotation.enable_annotation_reply_task.enable_annotation_reply_task
docker-worker-1      |   . tasks.annotation.update_annotation_to_index_task.update_annotation_to_index_task
docker-worker-1      |   . tasks.batch_create_segment_to_index_task.batch_create_segment_to_index_task
docker-worker-1      |   . tasks.clean_dataset_task.clean_dataset_task
docker-worker-1      |   . tasks.clean_document_task.clean_document_task
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 35
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| aclIpParseIpData: IPv6 has not been enabled.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| WARNING: (B) '::/0' is a subnetwork of (A) '::/0'
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| WARNING: because of this '::/0' is ignored to keep splay tree searching predictable
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| WARNING: You should probably remove '::/0' from the ACL named 'all'
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Created PID file (/run/squid.pid)
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Set Current Directory to /var/spool/squid
docker-web-1         |                 Monitor in production:
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Starting Squid Cache version 6.1 for x86_64-pc-linux-gnu...
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 36
docker-web-1         |                 $ pm2 monitor
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 37
docker-web-1         | 
docker-worker-1      |   . tasks.clean_notion_document_task.clean_notion_document_task
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 38
docker-web-1         |                 Make pm2 auto-boot at server restart:
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Service Name: squid
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 39
docker-worker-1      |   . tasks.deal_dataset_vector_index_task.deal_dataset_vector_index_task
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 40
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 41
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 42
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 43
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 44
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 45
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 46
docker-nginx-1       | 2024/05/22 06:32:21 [notice] 1#1: start worker process 47
docker-worker-1      |   . tasks.delete_segment_from_index_task.delete_segment_from_index_task
docker-worker-1      |   . tasks.disable_segment_from_index_task.disable_segment_from_index_task
docker-worker-1      |   . tasks.document_indexing_sync_task.document_indexing_sync_task
docker-worker-1      |   . tasks.document_indexing_task.document_indexing_task
docker-worker-1      |   . tasks.document_indexing_update_task.document_indexing_update_task
docker-worker-1      |   . tasks.duplicate_document_indexing_task.duplicate_document_indexing_task
docker-worker-1      |   . tasks.enable_segment_to_index_task.enable_segment_to_index_task
docker-worker-1      |   . tasks.mail_invite_member_task.send_invite_member_mail_task
docker-worker-1      |   . tasks.recover_document_indexing_task.recover_document_indexing_task
docker-worker-1      |   . tasks.remove_document_from_index_task.remove_document_from_index_task
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Process ID 40
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Process Roles: master worker
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| With 1048576 file descriptors available
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Initializing IP Cache...
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| DNS IPv4 socket created at 0.0.0.0, FD 8
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Adding nameserver 127.0.0.11 from /etc/resolv.conf
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Adding ndots 1 from /etc/resolv.conf
docker-worker-1      |   . tasks.retry_document_indexing_task.retry_document_indexing_task
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Logfile: opening log daemon:/var/log/squid/access.log
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Logfile Daemon: opening log /var/log/squid/access.log
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Store logging disabled
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Swap maxSize 0 + 262144 KB, estimated 20164 objects
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Target number of buckets: 1008
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Using 8192 Store buckets
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Max Mem  size: 262144 KB
docker-worker-1      | 
docker-web-1         |                 $ pm2 startup
docker-web-1         | 
docker-web-1         |                 To go further checkout:
docker-web-1         |                 http://pm2.io/
docker-web-1         | 
docker-web-1         | 
docker-web-1         |                         -------------
docker-web-1         | 
docker-web-1         | pm2 launched in no-daemon mode (you can add DEBUG="*" env variable to get more messages)
docker-web-1         | 2024-05-22T06:32:20: PM2 log: Launching in no daemon mode
docker-web-1         | 2024-05-22T06:32:20: PM2 log: [PM2][WARN] Applications dify-web not running, starting...
docker-web-1         | 2024-05-22T06:32:20: PM2 log: App [dify-web:0] starting in -cluster mode-
docker-web-1         | 2024-05-22T06:32:20: PM2 log: App [dify-web:0] online
docker-web-1         | 2024-05-22T06:32:20: PM2 log: App [dify-web:1] starting in -cluster mode-
docker-web-1         | 2024-05-22T06:32:20: PM2 log: App [dify-web:1] online
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Max Swap size: 0 KB
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Using Least Load store dir selection
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Set Current Directory to /var/spool/squid
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Finished loading MIME types and icons.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| HTCP Disabled.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Pinger socket opened on FD 14
docker-web-1         | 2024-05-22T06:32:20: PM2 log: [PM2] App [dify-web] launched (2 instances)
docker-web-1         | 2024-05-22T06:32:20: PM2 log: ┌────┬─────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
docker-web-1         | │ id │ name        │ namespace   │ version │ mode    │ pid      │ uptime │ ↺    │ status    │ cpu      │ mem      │ user     │ watching │
docker-web-1         | ├────┼─────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
docker-web-1         | │ 0  │ dify-web    │ default     │ 0.6.8   │ cluster │ 18       │ 0s     │ 0    │ online    │ 0%       │ 51.5mb   │ root     │ disabled │
docker-web-1         | │ 1  │ dify-web    │ default     │ 0.6.8   │ cluster │ 25       │ 0s     │ 0    │ online    │ 0%       │ 42.6mb   │ root     │ disabled │
docker-web-1         | └────┴─────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
docker-web-1         | 2024-05-22T06:32:20: PM2 log: [--no-daemon] Continue to stream logs
docker-web-1         | 2024-05-22T06:32:20: PM2 log: [--no-daemon] Exit on target PM2 exit pid=7
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Squid plugin modules loaded: 0
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Adaptation support is off.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Accepting HTTP Socket connections at conn2 local=0.0.0.0:3128 remote=[::] FD 11 flags=9
docker-ssrf_proxy-1  |     listening port: 3128
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Accepting reverse-proxy HTTP Socket connections at conn4 local=0.0.0.0:8194 remote=[::] FD 12 flags=9
docker-ssrf_proxy-1  |     listening port: 8194
docker-ssrf_proxy-1  | 2024/05/22 06:32:20| Configuring Parent sandbox
docker-web-1         | 06:32:20 0|dify-web  |    ▲ Next.js 14.1.0
docker-worker-1      | [2024-05-22 06:32:25,090: INFO/MainProcess] Connected to redis://:**@redis:6379/1
docker-web-1         | 06:32:20 0|dify-web  |    - Local:        http://6ce7543c0b30:3000
docker-worker-1      | [2024-05-22 06:32:25,092: INFO/MainProcess] mingle: searching for neighbors
docker-worker-1      | [2024-05-22 06:32:26,098: INFO/MainProcess] mingle: all alone
docker-worker-1      | [2024-05-22 06:32:26,106: INFO/MainProcess] pidbox: Connected to redis://:**@redis:6379/1.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20 pinger| WARNING: BCP 177 violation. Detected non-functional IPv6 loopback.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20 pinger| Initialising ICMP pinger ...
docker-ssrf_proxy-1  | 2024/05/22 06:32:20 pinger| ICMP socket opened.
docker-ssrf_proxy-1  | 2024/05/22 06:32:20 pinger| ICMPv6 socket opened
docker-worker-1      | [2024-05-22 06:32:26,107: INFO/MainProcess] celery@0d4cbcfe6432 ready.
docker-ssrf_proxy-1  | 2024/05/22 06:32:21| storeLateRelease: released 0 objects
docker-web-1         | 06:32:20 0|dify-web  |    - Network:      http://192.168.144.6:3000
docker-web-1         | 06:32:20 0|dify-web  |  ✓ Ready in 45ms
docker-web-1         | 06:32:20 1|dify-web  |    ▲ Next.js 14.1.0
docker-web-1         | 06:32:20 1|dify-web  |    - Local:        http://6ce7543c0b30:3000
docker-web-1         | 06:32:20 1|dify-web  |    - Network:      http://192.168.144.6:3000
docker-web-1         | 06:32:20 1|dify-web  |  ✓ Ready in 61ms

I would greatly appreciate your assistance in resolving this matter!

goldeneave avatar May 22 '24 06:05 goldeneave

Can you attach the docker-compose.yaml? It seems you have missed some env vars.

crazywoola avatar May 29 '24 11:05 crazywoola

Certainly. I kept it essentially as-is and did not modify anything:

services:
  # API service
  api:
    image: langgenius/dify-api:0.6.8
    restart: always
    environment:
      # Startup mode, 'api' starts the API server.
      MODE: api
      # The log level for the application. Supported values are `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`
      LOG_LEVEL: INFO
      # enable DEBUG mode to output more logs
      # DEBUG : true
      # A secret key that is used for securely signing the session cookie and encrypting sensitive information on the database. You can generate a strong key using `openssl rand -base64 42`.
      SECRET_KEY: sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U
      # The base URL of console application web frontend, refers to the Console base URL of WEB service if console domain is
      # different from api or web app domain.
      # example: http://cloud.dify.ai
      CONSOLE_WEB_URL: ''
      # Password for admin user initialization.
      # If left unset, admin user will not be prompted for a password when creating the initial admin account.
      INIT_PASSWORD: ''
      # The base URL of console application api server, refers to the Console base URL of WEB service if console domain is
      # different from api or web app domain.
      # example: http://cloud.dify.ai
      CONSOLE_API_URL: ''
      # The URL prefix for Service API endpoints, refers to the base URL of the current API service if api domain is
      # different from console domain.
      # example: http://api.dify.ai
      SERVICE_API_URL: ''
      # The URL prefix for Web APP frontend, refers to the Web App base URL of WEB service if web app domain is different from
      # console or api domain.
      # example: http://udify.app
      APP_WEB_URL: ''
      # File preview or download Url prefix.
      # used to display File preview or download Url to the front-end or as Multi-model inputs;
      # Url is signed and has expiration time.
      FILES_URL: ''
      # When enabled, migrations will be executed prior to application startup and the application will start after the migrations have completed.
      MIGRATION_ENABLED: 'true'
      # The configurations of postgres database connection.
      # It is consistent with the configuration in the 'db' service below.
      DB_USERNAME: postgres
      DB_PASSWORD: difyai123456
      DB_HOST: db
      DB_PORT: 5432
      DB_DATABASE: dify
      # The configurations of redis connection.
      # It is consistent with the configuration in the 'redis' service below.
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_USERNAME: ''
      REDIS_PASSWORD: difyai123456
      REDIS_USE_SSL: 'false'
      # use redis db 0 for redis cache
      REDIS_DB: 0
      # The configurations of celery broker.
      # Use redis as the broker, and redis db 1 for celery broker.
      CELERY_BROKER_URL: redis://:difyai123456@redis:6379/1
      # Specifies the allowed origins for cross-origin requests to the Web API, e.g. https://dify.app or * for all origins.
      WEB_API_CORS_ALLOW_ORIGINS: '*'
      # Specifies the allowed origins for cross-origin requests to the console API, e.g. https://cloud.dify.ai or * for all origins.
      CONSOLE_CORS_ALLOW_ORIGINS: '*'
      # CSRF Cookie settings
      # Controls whether a cookie is sent with cross-site requests,
      # providing some protection against cross-site request forgery attacks
      #
      # Default: `SameSite=Lax, Secure=false, HttpOnly=true`
      # This default configuration supports same-origin requests using either HTTP or HTTPS,
      # but does not support cross-origin requests. It is suitable for local debugging purposes.
      #
      # If you want to enable cross-origin support,
      # you must use the HTTPS protocol and set the configuration to `SameSite=None, Secure=true, HttpOnly=true`.
      #
      # The type of storage to use for storing user files. Supported values are `local` and `s3` and `azure-blob` and `google-storage`, Default: `local`
      STORAGE_TYPE: local
      # The path to the local storage directory, the directory relative the root path of API service codes or absolute path. Default: `storage` or `/home/john/storage`.
      # only available when STORAGE_TYPE is `local`.
      STORAGE_LOCAL_PATH: storage
      # The S3 storage configurations, only available when STORAGE_TYPE is `s3`.
      S3_ENDPOINT: 'https://xxx.r2.cloudflarestorage.com'
      S3_BUCKET_NAME: 'difyai'
      S3_ACCESS_KEY: 'ak-difyai'
      S3_SECRET_KEY: 'sk-difyai'
      S3_REGION: 'us-east-1'
      # The Azure Blob storage configurations, only available when STORAGE_TYPE is `azure-blob`.
      AZURE_BLOB_ACCOUNT_NAME: 'difyai'
      AZURE_BLOB_ACCOUNT_KEY: 'difyai'
      AZURE_BLOB_CONTAINER_NAME: 'difyai-container'
      AZURE_BLOB_ACCOUNT_URL: 'https://<your_account_name>.blob.core.windows.net'
      # The Google storage configurations, only available when STORAGE_TYPE is `google-storage`.
      GOOGLE_STORAGE_BUCKET_NAME: 'yout-bucket-name'
      # if you want to use Application Default Credentials, you can leave GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64 empty.
      GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64: 'your-google-service-account-json-base64-string'
      # The type of vector store to use. Supported values are `weaviate`, `qdrant`, `milvus`, `relyt`.
      VECTOR_STORE: weaviate
      # The Weaviate endpoint URL. Only available when VECTOR_STORE is `weaviate`.
      WEAVIATE_ENDPOINT: http://weaviate:8080
      # The Weaviate API key.
      WEAVIATE_API_KEY: WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih
      # The Qdrant endpoint URL. Only available when VECTOR_STORE is `qdrant`.
      QDRANT_URL: http://qdrant:6333
      # The Qdrant API key.
      QDRANT_API_KEY: difyai123456
      # The Qdrant client timeout setting.
      QDRANT_CLIENT_TIMEOUT: 20
      # The Qdrant client enable gRPC mode.
      QDRANT_GRPC_ENABLED: 'false'
      # The Qdrant server gRPC mode PORT.
      QDRANT_GRPC_PORT: 6334
      # Milvus configuration Only available when VECTOR_STORE is `milvus`.
      # The milvus host.
      MILVUS_HOST: 127.0.0.1
      # The milvus port.
      MILVUS_PORT: 19530
      # The milvus username.
      MILVUS_USER: root
      # The milvus password.
      MILVUS_PASSWORD: Milvus
      # The milvus tls switch.
      MILVUS_SECURE: 'false'
      # relyt configurations
      RELYT_HOST: db
      RELYT_PORT: 5432
      RELYT_USER: postgres
      RELYT_PASSWORD: difyai123456
      RELYT_DATABASE: postgres
      # pgvector configurations
      PGVECTOR_HOST: pgvector
      PGVECTOR_PORT: 5432
      PGVECTOR_USER: postgres
      PGVECTOR_PASSWORD: difyai123456
      PGVECTOR_DATABASE: dify
      # Mail configuration, support: resend, smtp
      MAIL_TYPE: ''
      # default send from email address, if not specified
      MAIL_DEFAULT_SEND_FROM: 'YOUR EMAIL FROM (eg: no-reply <[email protected]>)'
      SMTP_SERVER: ''
      SMTP_PORT: 587
      SMTP_USERNAME: ''
      SMTP_PASSWORD: ''
      SMTP_USE_TLS: 'true'
      # the api-key for resend (https://resend.com)
      RESEND_API_KEY: ''
      RESEND_API_URL: https://api.resend.com
      # The DSN for Sentry error reporting. If not set, Sentry error reporting will be disabled.
      SENTRY_DSN: ''
      # The sample rate for Sentry events. Default: `1.0`
      SENTRY_TRACES_SAMPLE_RATE: 1.0
      # The sample rate for Sentry profiles. Default: `1.0`
      SENTRY_PROFILES_SAMPLE_RATE: 1.0
      # Notion import configuration, support public and internal
      NOTION_INTEGRATION_TYPE: public
      NOTION_CLIENT_SECRET: you-client-secret
      NOTION_CLIENT_ID: you-client-id
      NOTION_INTERNAL_SECRET: you-internal-secret
      # The sandbox service endpoint.
      CODE_EXECUTION_ENDPOINT: "http://sandbox:8194"
      CODE_EXECUTION_API_KEY: dify-sandbox
      CODE_MAX_NUMBER: 9223372036854775807
      CODE_MIN_NUMBER: -9223372036854775808
      CODE_MAX_STRING_LENGTH: 80000
      TEMPLATE_TRANSFORM_MAX_LENGTH: 80000
      CODE_MAX_STRING_ARRAY_LENGTH: 30
      CODE_MAX_OBJECT_ARRAY_LENGTH: 30
      CODE_MAX_NUMBER_ARRAY_LENGTH: 1000
      # SSRF Proxy server
      SSRF_PROXY_HTTP_URL: 'http://ssrf_proxy:3128'
      SSRF_PROXY_HTTPS_URL: 'http://ssrf_proxy:3128'
      # Indexing configuration
      INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH: 1000
    depends_on:
      - db
      - redis
    volumes:
      # Mount the storage directory to the container, for storing user files.
      - ./volumes/app/storage:/app/api/storage
    # uncomment to expose dify-api port to host
    # ports:
    #   - "5001:5001"
    networks:
      - ssrf_proxy_network
      - default

  # worker service
  # The Celery worker for processing the queue.
  worker:
    image: langgenius/dify-api:0.6.8
    restart: always
    environment:
      CONSOLE_WEB_URL: ''
      # Startup mode, 'worker' starts the Celery worker for processing the queue.
      MODE: worker

      # --- All the configurations below are the same as those in the 'api' service. ---

      # The log level for the application. Supported values are `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`
      LOG_LEVEL: INFO
      # A secret key that is used for securely signing the session cookie and encrypting sensitive information on the database. You can generate a strong key using `openssl rand -base64 42`.
      # same as the API service
      SECRET_KEY: sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U
      # The configurations of postgres database connection.
      # It is consistent with the configuration in the 'db' service below.
      DB_USERNAME: postgres
      DB_PASSWORD: difyai123456
      DB_HOST: db
      DB_PORT: 5432
      DB_DATABASE: dify
      # The configurations of redis cache connection.
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_USERNAME: ''
      REDIS_PASSWORD: difyai123456
      REDIS_DB: 0
      REDIS_USE_SSL: 'false'
      # The configurations of celery broker.
      CELERY_BROKER_URL: redis://:difyai123456@redis:6379/1
      # The type of storage to use for storing user files. Supported values are `local` and `s3` and `azure-blob` and `google-storage`, Default: `local`
      STORAGE_TYPE: local
      STORAGE_LOCAL_PATH: storage
      # The S3 storage configurations, only available when STORAGE_TYPE is `s3`.
      S3_ENDPOINT: 'https://xxx.r2.cloudflarestorage.com'
      S3_BUCKET_NAME: 'difyai'
      S3_ACCESS_KEY: 'ak-difyai'
      S3_SECRET_KEY: 'sk-difyai'
      S3_REGION: 'us-east-1'
      # The Azure Blob storage configurations, only available when STORAGE_TYPE is `azure-blob`.
      AZURE_BLOB_ACCOUNT_NAME: 'difyai'
      AZURE_BLOB_ACCOUNT_KEY: 'difyai'
      AZURE_BLOB_CONTAINER_NAME: 'difyai-container'
      AZURE_BLOB_ACCOUNT_URL: 'https://<your_account_name>.blob.core.windows.net'
      # The Google storage configurations, only available when STORAGE_TYPE is `google-storage`.
      GOOGLE_STORAGE_BUCKET_NAME: 'yout-bucket-name'
      # if you want to use Application Default Credentials, you can leave GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64 empty.
      GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64: 'your-google-service-account-json-base64-string'
      # The type of vector store to use. Supported values are `weaviate`, `qdrant`, `milvus`, `relyt`, `pgvector`.
      VECTOR_STORE: weaviate
      # The Weaviate endpoint URL. Only available when VECTOR_STORE is `weaviate`.
      WEAVIATE_ENDPOINT: http://weaviate:8080
      # The Weaviate API key.
      WEAVIATE_API_KEY: WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih
      # The Qdrant endpoint URL. Only available when VECTOR_STORE is `qdrant`.
      QDRANT_URL: http://qdrant:6333
      # The Qdrant API key.
      QDRANT_API_KEY: difyai123456
      # The Qdrant client timeout setting.
      QDRANT_CLIENT_TIMEOUT: 20
      # The Qdrant client enable gRPC mode.
      QDRANT_GRPC_ENABLED: 'false'
      # The Qdrant server gRPC mode PORT.
      QDRANT_GRPC_PORT: 6334
      # Milvus configuration Only available when VECTOR_STORE is `milvus`.
      # The milvus host.
      MILVUS_HOST: 127.0.0.1
      # The milvus port.
      MILVUS_PORT: 19530
      # The milvus username.
      MILVUS_USER: root
      # The milvus password.
      MILVUS_PASSWORD: Milvus
      # The milvus tls switch.
      MILVUS_SECURE: 'false'
      # Mail configuration, support: resend
      MAIL_TYPE: ''
      # default send from email address, if not specified
      MAIL_DEFAULT_SEND_FROM: 'YOUR EMAIL FROM (eg: no-reply <[email protected]>)'
      SMTP_SERVER: ''
      SMTP_PORT: 587
      SMTP_USERNAME: ''
      SMTP_PASSWORD: ''
      SMTP_USE_TLS: 'true'
      # the api-key for resend (https://resend.com)
      RESEND_API_KEY: ''
      RESEND_API_URL: https://api.resend.com
      # relyt configurations
      RELYT_HOST: db
      RELYT_PORT: 5432
      RELYT_USER: postgres
      RELYT_PASSWORD: difyai123456
      RELYT_DATABASE: postgres
      # pgvector configurations
      PGVECTOR_HOST: pgvector
      PGVECTOR_PORT: 5432
      PGVECTOR_USER: postgres
      PGVECTOR_PASSWORD: difyai123456
      PGVECTOR_DATABASE: dify
      # Notion import configuration, support public and internal
      NOTION_INTEGRATION_TYPE: public
      NOTION_CLIENT_SECRET: you-client-secret
      NOTION_CLIENT_ID: you-client-id
      NOTION_INTERNAL_SECRET: you-internal-secret
      # Indexing configuration
      INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH: 1000
    depends_on:
      - db
      - redis
    volumes:
      # Mount the storage directory to the container, for storing user files.
      - ./volumes/app/storage:/app/api/storage
    networks:
      - ssrf_proxy_network
      - default

  # Frontend web application.
  web:
    image: langgenius/dify-web:0.6.8
    restart: always
    environment:
      # The base URL of console application api server, refers to the Console base URL of WEB service if console domain is
      # different from api or web app domain.
      # example: http://cloud.dify.ai
      CONSOLE_API_URL: ''
      # The URL for Web APP api server, refers to the Web App base URL of WEB service if web app domain is different from
      # console or api domain.
      # example: http://udify.app
      APP_API_URL: ''
      # The DSN for Sentry error reporting. If not set, Sentry error reporting will be disabled.
      SENTRY_DSN: ''
    # uncomment to expose dify-web port to host
    ports:
      - "3000:3000"

  # The postgres database.
  db:
    image: postgres:15-alpine
    restart: always
    environment:
      PGUSER: postgres
      # The password for the default postgres user.
      POSTGRES_PASSWORD: difyai123456
      # The name of the default postgres database.
      POSTGRES_DB: dify
      # postgres data directory
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - ./volumes/db/data:/var/lib/postgresql/data
    # uncomment to expose db(postgresql) port to host
    # ports:
    #   - "5432:5432"
    healthcheck:
      test: [ "CMD", "pg_isready" ]
      interval: 1s
      timeout: 3s
      retries: 30

  # The redis cache.
  redis:
    image: redis:6-alpine
    restart: always
    volumes:
      # Mount the redis data directory to the container.
      - ./volumes/redis/data:/data
    # Set the redis password when startup redis server.
    command: redis-server --requirepass difyai123456
    healthcheck:
      test: [ "CMD", "redis-cli", "ping" ]
    # uncomment to expose redis port to host
    # ports:
    #   - "6379:6379"

  # The Weaviate vector store.
  weaviate:
    image: semitechnologies/weaviate:1.19.0
    restart: always
    volumes:
      # Mount the Weaviate data directory to the container.
      - ./volumes/weaviate:/var/lib/weaviate
    environment:
      # The Weaviate configurations
      # You can refer to the [Weaviate](https://weaviate.io/developers/weaviate/config-refs/env-vars) documentation for more information.
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'false'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'none'
      CLUSTER_HOSTNAME: 'node1'
      AUTHENTICATION_APIKEY_ENABLED: 'true'
      AUTHENTICATION_APIKEY_ALLOWED_KEYS: 'WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih'
      AUTHENTICATION_APIKEY_USERS: '[email protected]'
      AUTHORIZATION_ADMINLIST_ENABLED: 'true'
      AUTHORIZATION_ADMINLIST_USERS: '[email protected]'
    # uncomment to expose weaviate port to host
    # ports:
    #  - "8080:8080"

  # The DifySandbox
  sandbox:
    image: langgenius/dify-sandbox:0.2.0
    restart: always
    environment:
      # The DifySandbox configurations
      # Make sure you are changing this key for your deployment with a strong key.
      # You can generate a strong key using `openssl rand -base64 42`.
      API_KEY: dify-sandbox
      GIN_MODE: 'release'
      WORKER_TIMEOUT: 15
      ENABLE_NETWORK: 'true'
      HTTP_PROXY: 'http://ssrf_proxy:3128'
      HTTPS_PROXY: 'http://ssrf_proxy:3128'
    volumes:
      - ./volumes/sandbox/dependencies:/dependencies
    networks:
      - ssrf_proxy_network

  # ssrf_proxy server
  # for more information, please refer to
  # https://docs.dify.ai/getting-started/install-self-hosted/install-faq#id-16.-why-is-ssrf_proxy-needed
  ssrf_proxy:
    image: ubuntu/squid:latest
    restart: always
    volumes:
      # pls clearly modify the squid.conf file to fit your network environment.
      - ./volumes/ssrf_proxy/squid.conf:/etc/squid/squid.conf
    networks:
      - ssrf_proxy_network
      - default
  # Qdrant vector store.
  # uncomment to use qdrant as vector store.
  # (if uncommented, you need to comment out the weaviate service above,
  # and set VECTOR_STORE to qdrant in the api & worker service.)
  # qdrant:
  #   image: langgenius/qdrant:v1.7.3
  #   restart: always
  #   volumes:
  #     - ./volumes/qdrant:/qdrant/storage
  #   environment:
  #     QDRANT_API_KEY: 'difyai123456'
  #   # uncomment to expose qdrant port to host
  #   # ports:
  #   #  - "6333:6333"
  #   #  - "6334:6334"

  # The pgvector vector database.
  # Uncomment to use pgvector as vector store.
  # pgvector:
  #   image: pgvector/pgvector:pg16
  #   restart: always
  #   environment:
  #     PGUSER: postgres
  #     # The password for the default postgres user.
  #     POSTGRES_PASSWORD: difyai123456
  #     # The name of the default postgres database.
  #     POSTGRES_DB: dify
  #     # postgres data directory
  #     PGDATA: /var/lib/postgresql/data/pgdata
  #   volumes:
  #     - ./volumes/pgvector/data:/var/lib/postgresql/data
  #   # uncomment to expose db(postgresql) port to host
  #   # ports:
  #   #   - "5433:5432"
  #   healthcheck:
  #     test: [ "CMD", "pg_isready" ]
  #     interval: 1s
  #     timeout: 3s
  #     retries: 30


  # The nginx reverse proxy.
  # used for reverse proxying the API service and Web service.
  nginx:
    image: nginx:latest
    restart: always
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/proxy.conf:/etc/nginx/proxy.conf
      - ./nginx/conf.d:/etc/nginx/conf.d
      #- ./nginx/ssl:/etc/ssl
    depends_on:
      - api
      - web
    ports:
      - "80:80"
      #- "443:443"
networks:
  # create a network between sandbox, api and ssrf_proxy, and can not access outside.
  ssrf_proxy_network:
    driver: bridge
    internal: true

goldeneave avatar May 29 '24 11:05 goldeneave

I've been trying to deploy Dify using the instructions in the documentation, specifically the local deployment approach combined with the docker-compose.middleware.yaml file. However, I've run into issues that I haven't been able to resolve on my own. I also checked the issue history and found that this does not seem specific to my setup, since several other users have reported similar problems. Any guidance or additional troubleshooting steps would be greatly appreciated!
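For context, the middleware services were brought up roughly like this (a sketch based on the documented workflow; exact paths and flags in your checkout may differ):

cd dify/docker
# Start only the middleware services (db, redis, weaviate, etc.) for local source deployment
docker compose -f docker-compose.middleware.yaml up -d
# Confirm that everything is running
docker compose -f docker-compose.middleware.yaml ps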

goldeneave avatar May 29 '24 13:05 goldeneave

Please press F12 in the browser and check whether the API-related requests are returning a 502 error. I encountered a similar issue and traced it to a network disconnection between the API service and the DB service: from the host machine it is possible to connect to the DB, but not from within the API container. I also recommend entering the container and installing nmap to check whether you are hitting the same issue. (screenshot attached)
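A minimal way to run the same check from inside the api container (a sketch; it assumes the compose service is named api and that Python is on PATH in the image, which it should be since dify-api runs a Python application):

cd dify/docker
# Try to open a TCP connection from the api container to the db and redis services
docker compose exec api python -c "import socket; socket.create_connection(('db', 5432), timeout=3); print('db reachable')"
docker compose exec api python -c "import socket; socket.create_connection(('redis', 6379), timeout=3); print('redis reachable')"
# A timeout or ConnectionRefusedError here points to container-to-container networking rather than Dify itself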

4blacktea avatar Jun 14 '24 11:06 4blacktea

Thanks, mate. After clearing out some folders in the docker directory, the problem was solved for me as well.

goldeneave avatar Jun 14 '24 11:06 goldeneave

Hi which folders did you remove? I have the exact same error and I can't figure out how to resolve it.

zanderjiang avatar Jun 28 '24 08:06 zanderjiang

I removed these folders with rm -rf ./volumes/app ./volumes/db ./volumes/redis ./volumes/weaviate, then restarted Docker, and that solved my problem.
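For anyone trying the same fix, the full sequence looks roughly like this (run from the dify/docker directory; note that this wipes all existing Dify data, since those folders hold the Postgres, Redis, Weaviate, and uploaded-file volumes):

docker compose down
# Remove the persisted volumes so the database, cache, and vector store start from a clean state
rm -rf ./volumes/app ./volumes/db ./volumes/redis ./volumes/weaviate
docker compose up -d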

goldeneave avatar Jun 28 '24 08:06 goldeneave

I removed these folders with rm -rf ./volumes/app ./volumes/db ./volumes/redis ./volumes/weaviate, then restarted Docker, and that solved my problem.

Thanks, but that's not my issue; in my case port 5001 returns a 404 error.

zanderjiang avatar Jun 28 '24 08:06 zanderjiang