
dokploy-postgres volume corruption after repeated dokploy restarts

Open NemurenaiDev opened this issue 4 months ago • 11 comments

To Reproduce

  1. Deploy dokploy (v0.24.4 - v0.24.11) on a fresh VPS (Ubuntu 24.04, Hostinger KVM)
  2. Run several docker compose apps (NestJS, Vite React, small bun apps)
  3. Wait a couple of days; dokploy restarts itself with an "ELIFECYCLE  Command failed" error
  4. After ~N restarts, the dokploy-postgres container fails with: "PANIC: could not locate a valid checkpoint record"
  5. Dokploy cannot start again until the postgres volume is dropped and restored

Current vs. Expected behavior

Expected behavior:

  • Dokploy should not restart spontaneously.
  • Postgres volumes should not get corrupted from restarts.

Provide environment information

VPS Provider: Hostinger

OS: Ubuntu 24.04.2 LTS x86_64 
Host: KVM/QEMU (Standard PC (i440FX + PIIX, 1996) pc-i440fx-9.2) 
Kernel: 6.8.0-78-generic 
Uptime: 1 day, 18 hours, 58 mins 
Packages: 810 (dpkg) 
Shell: fish 3.7.0 
Terminal: /dev/pts/0 
CPU: AMD EPYC 7543P (2) @ 2.794GHz 
Memory: 3067MiB / 7941MiB 

Apps that were running: 
 - docker compose:
    - NestJS + Vite React app (ghcr images)
    - Multiple tiny TypeScript apps on the oven/bun image

Which area(s) are affected? (Select all that apply)

Installation

Are you deploying the applications where Dokploy is installed or on a remote server?

Same server where Dokploy is installed

Additional context

Dokploy restart notifications:

[8/10/25 1:02 PM] ✅ Dokploy Server Restarted (Date: Aug 10, 2025 Time: 10:02:19 AM)
[8/11/25 12:00 PM] ✅ Dokploy Server Restarted (Date: Aug 11, 2025 Time: 9:00:57 AM)
[8/12/25 10:55 AM] ✅ Dokploy Server Restarted (Date: Aug 12, 2025 Time: 7:55:31 AM)
[8/13/25 5:45 AM] ✅ Dokploy Server Restarted (Date: Aug 13, 2025 Time: 2:45:15 AM)
[8/14/25 6:17 AM] ✅ Dokploy Server Restarted (Date: Aug 14, 2025 Time: 3:17:18 AM)
[8/16/25 6:27 PM] ✅ Dokploy Server Restarted (Date: Aug 16, 2025 Time: 3:27:41 PM)
[8/17/25 5:28 AM] ✅ Dokploy Server Restarted (Date: Aug 17, 2025 Time: 2:28:27 AM)
[8/17/25 4:25 PM] ✅ Dokploy Server Restarted (Date: Aug 17, 2025 Time: 1:25:34 PM)
[8/18/25 3:24 AM] ✅ Dokploy Server Restarted (Date: Aug 18, 2025 Time: 12:24:13 AM)
[8/18/25 11:49 AM] ✅ Dokploy Server Restarted (Date: Aug 18, 2025 Time: 8:49:34 AM)
[8/18/25 9:39 PM] ✅ Dokploy Server Restarted (Date: Aug 18, 2025 Time: 6:39:06 PM)
[8/18/25 9:39 PM] ✅ Dokploy Server Restarted (Date: Aug 18, 2025 Time: 6:39:16 PM)
[8/20/25 7:10 AM] ✅ Dokploy Server Restarted (Date: Aug 20, 2025 Time: 4:10:26 AM)
[8/20/25 7:19 AM] ✅ Dokploy Server Restarted (Date: Aug 20, 2025 Time: 4:19:05 AM)
[8/20/25 7:01 PM] ✅ Dokploy Server Restarted (Date: Aug 20, 2025 Time: 4:01:08 PM)
[8/20/25 7:02 PM] ✅ Dokploy Server Restarted (Date: Aug 20, 2025 Time: 4:02:24 PM)
[8/21/25 5:44 AM] ✅ Dokploy Server Restarted (Date: Aug 21, 2025 Time: 2:44:55 AM)
[8/21/25 5:56 AM] ✅ Dokploy Server Restarted (Date: Aug 21, 2025 Time: 2:56:41 AM)
[8/21/25 6:04 AM] ✅ Dokploy Server Restarted (Date: Aug 21, 2025 Time: 3:04:57 AM)
[8/21/25 6:03 PM] ✅ Dokploy Server Restarted (Date: Aug 21, 2025 Time: 3:03:51 PM)
[8/22/25 3:31 AM] ✅ Dokploy Server Restarted (Date: Aug 22, 2025 Time: 12:31:43 AM)
[8/22/25 3:35 AM] ✅ Dokploy Server Restarted (Date: Aug 22, 2025 Time: 12:35:25 AM)

Docker logs:

root@server ~# docker service ls
ID             NAME               MODE         REPLICAS   IMAGE                    PORTS
nx4su4h1evfe   dokploy            replicated   1/1        dokploy/dokploy:latest
uan1qt2jeabq   dokploy-postgres   replicated   0/1        postgres:16
q2izjm6duuj8   dokploy-redis      replicated   1/1        redis:7

root@server ~# docker service logs dokploy dokploy.1.35f2o4uagg52@srv919152 | dokploy.1.35f2o4uagg52@srv919152 | > [email protected] start /app dokploy.1.35f2o4uagg52@srv919152 | > node -r dotenv/config dist/server.mjs dokploy.1.35f2o4uagg52@srv919152 | dokploy.1.35f2o4uagg52@srv919152 | Default middlewares already exists dokploy.1.35f2o4uagg52@srv919152 | Network is already initilized dokploy.1.35f2o4uagg52@srv919152 | Main config already exists dokploy.1.35f2o4uagg52@srv919152 | Default traefik config already exists dokploy.1.35f2o4uagg52@srv919152 | { dokploy.1.35f2o4uagg52@srv919152 | severity_local: 'NOTICE', dokploy.1.35f2o4uagg52@srv919152 | severity: 'NOTICE', dokploy.1.35f2o4uagg52@srv919152 | code: '42P06', dokploy.1.35f2o4uagg52@srv919152 | message: 'schema "drizzle" already exists, skipping', dokploy.1.35f2o4uagg52@srv919152 | file: 'schemacmds.c', dokploy.1.35f2o4uagg52@srv919152 | line: '132', dokploy.1.35f2o4uagg52@srv919152 | routine: 'CreateSchemaCommand' dokploy.1.35f2o4uagg52@srv919152 | } dokploy.1.35f2o4uagg52@srv919152 | { dokploy.1.35f2o4uagg52@srv919152 | severity_local: 'NOTICE', dokploy.1.35f2o4uagg52@srv919152 | severity: 'NOTICE', dokploy.1.35f2o4uagg52@srv919152 | code: '42P07', dokploy.1.35f2o4uagg52@srv919152 | message: 'relation "__drizzle_migrations" already exists, skipping', dokploy.1.35f2o4uagg52@srv919152 | file: 'parse_utilcmd.c', dokploy.1.35f2o4uagg52@srv919152 | line: '207', dokploy.1.35f2o4uagg52@srv919152 | routine: 'transformCreateStmt' dokploy.1.35f2o4uagg52@srv919152 | } dokploy.1.35f2o4uagg52@srv919152 | Migration complete dokploy.1.35f2o4uagg52@srv919152 | Setting up cron jobs.... dokploy.1.35f2o4uagg52@srv919152 | [Backup] web-server Enabled with cron: [0 0 * * *] dokploy.1.35f2o4uagg52@srv919152 | [Backup] postgres Enabled with cron: [0 0 * * *] dokploy.1.35f2o4uagg52@srv919152 | [Backup] postgres Enabled with cron: [0 0 * * *] dokploy.1.35f2o4uagg52@srv919152 | Starting log requests cleanup 0 0 * * * dokploy.1.35f2o4uagg52@srv919152 | Initializing 0 schedules dokploy.1.35f2o4uagg52@srv919152 | Setting up volume backups cron jobs.... dokploy.1.35f2o4uagg52@srv919152 | Initializing 0 volume backups dokploy.1.35f2o4uagg52@srv919152 | Server Started on: http://0.0.0.0:3000 dokploy.1.35f2o4uagg52@srv919152 | Starting Deployment Worker dokploy.1.35f2o4uagg52@srv919152 |  ELIFECYCLE  Command failed. 
dokploy.1.x5giix5b03ow@srv919152 | dokploy.1.x5giix5b03ow@srv919152 | > [email protected] start /app dokploy.1.x5giix5b03ow@srv919152 | > node -r dotenv/config dist/server.mjs dokploy.1.x5giix5b03ow@srv919152 | dokploy.1.x5giix5b03ow@srv919152 | Default middlewares already exists dokploy.1.x5giix5b03ow@srv919152 | Network is already initilized dokploy.1.x5giix5b03ow@srv919152 | Main config already exists dokploy.1.x5giix5b03ow@srv919152 | Default traefik config already exists dokploy.1.x5giix5b03ow@srv919152 | { dokploy.1.x5giix5b03ow@srv919152 | severity_local: 'NOTICE', dokploy.1.x5giix5b03ow@srv919152 | severity: 'NOTICE', dokploy.1.x5giix5b03ow@srv919152 | code: '42P06', dokploy.1.x5giix5b03ow@srv919152 | message: 'schema "drizzle" already exists, skipping', dokploy.1.x5giix5b03ow@srv919152 | file: 'schemacmds.c', dokploy.1.x5giix5b03ow@srv919152 | line: '132', dokploy.1.x5giix5b03ow@srv919152 | routine: 'CreateSchemaCommand' dokploy.1.x5giix5b03ow@srv919152 | } dokploy.1.x5giix5b03ow@srv919152 | { dokploy.1.x5giix5b03ow@srv919152 | severity_local: 'NOTICE', dokploy.1.x5giix5b03ow@srv919152 | severity: 'NOTICE', dokploy.1.x5giix5b03ow@srv919152 | code: '42P07', dokploy.1.x5giix5b03ow@srv919152 | message: 'relation "__drizzle_migrations" already exists, skipping', dokploy.1.x5giix5b03ow@srv919152 | file: 'parse_utilcmd.c', dokploy.1.x5giix5b03ow@srv919152 | line: '207', dokploy.1.x5giix5b03ow@srv919152 | routine: 'transformCreateStmt' dokploy.1.x5giix5b03ow@srv919152 | } dokploy.1.x5giix5b03ow@srv919152 | Migration complete dokploy.1.x5giix5b03ow@srv919152 | Setting up cron jobs.... dokploy.1.x5giix5b03ow@srv919152 | [Backup] web-server Enabled with cron: [0 0 * * *] dokploy.1.x5giix5b03ow@srv919152 | [Backup] postgres Enabled with cron: [0 0 * * *] dokploy.1.x5giix5b03ow@srv919152 | [Backup] postgres Enabled with cron: [0 0 * * *] dokploy.1.x5giix5b03ow@srv919152 | Starting log requests cleanup 0 0 * * * dokploy.1.x5giix5b03ow@srv919152 | Initializing 0 schedules dokploy.1.x5giix5b03ow@srv919152 | Setting up volume backups cron jobs.... 
dokploy.1.x5giix5b03ow@srv919152 | Initializing 0 volume backups dokploy.1.x5giix5b03ow@srv919152 | Server Started on: http://0.0.0.0:3000 dokploy.1.x5giix5b03ow@srv919152 | Starting Deployment Worker dokploy.1.x5giix5b03ow@srv919152 | { dokploy.1.x5giix5b03ow@srv919152 | severity_local: 'WARNING', dokploy.1.x5giix5b03ow@srv919152 | severity: 'WARNING', dokploy.1.x5giix5b03ow@srv919152 | code: '57P01', dokploy.1.x5giix5b03ow@srv919152 | message: 'terminating connection due to immediate shutdown command', dokploy.1.x5giix5b03ow@srv919152 | file: 'postgres.c', dokploy.1.x5giix5b03ow@srv919152 | line: '2974', dokploy.1.x5giix5b03ow@srv919152 | routine: 'quickdie' dokploy.1.x5giix5b03ow@srv919152 | } dokploy.1.x5giix5b03ow@srv919152 | ⨯ [Error: getaddrinfo ENOTFOUND dokploy-postgres] { dokploy.1.x5giix5b03ow@srv919152 | errno: -3008, dokploy.1.x5giix5b03ow@srv919152 | code: 'ENOTFOUND', dokploy.1.x5giix5b03ow@srv919152 | syscall: 'getaddrinfo', dokploy.1.x5giix5b03ow@srv919152 | hostname: 'dokploy-postgres' dokploy.1.x5giix5b03ow@srv919152 | } dokploy.1.x5giix5b03ow@srv919152 | ⨯ [Error: getaddrinfo ENOTFOUND dokploy-postgres] { dokploy.1.x5giix5b03ow@srv919152 | errno: -3008, dokploy.1.x5giix5b03ow@srv919152 | code: 'ENOTFOUND', dokploy.1.x5giix5b03ow@srv919152 | syscall: 'getaddrinfo', dokploy.1.x5giix5b03ow@srv919152 | hostname: 'dokploy-postgres' dokploy.1.x5giix5b03ow@srv919152 | } dokploy.1.x5giix5b03ow@srv919152 | ⨯ [Error: getaddrinfo ENOTFOUND dokploy-postgres] { dokploy.1.x5giix5b03ow@srv919152 | errno: -3008, dokploy.1.x5giix5b03ow@srv919152 | code: 'ENOTFOUND', dokploy.1.x5giix5b03ow@srv919152 | syscall: 'getaddrinfo', dokploy.1.x5giix5b03ow@srv919152 | hostname: 'dokploy-postgres' dokploy.1.x5giix5b03ow@srv919152 | } dokploy.1.x5giix5b03ow@srv919152 | ⨯ [Error: getaddrinfo ENOTFOUND dokploy-postgres] { dokploy.1.x5giix5b03ow@srv919152 | errno: -3008, dokploy.1.x5giix5b03ow@srv919152 | code: 'ENOTFOUND', dokploy.1.x5giix5b03ow@srv919152 | syscall: 'getaddrinfo', dokploy.1.x5giix5b03ow@srv919152 | hostname: 'dokploy-postgres' dokploy.1.x5giix5b03ow@srv919152 | } dokploy.1.x5giix5b03ow@srv919152 | ⨯ [Error: getaddrinfo ENOTFOUND dokploy-postgres] { dokploy.1.x5giix5b03ow@srv919152 | errno: -3008, dokploy.1.x5giix5b03ow@srv919152 | code: 'ENOTFOUND', dokploy.1.x5giix5b03ow@srv919152 | syscall: 'getaddrinfo', dokploy.1.x5giix5b03ow@srv919152 | hostname: 'dokploy-postgres' dokploy.1.x5giix5b03ow@srv919152 | } dokploy.1.x5giix5b03ow@srv919152 | ⨯ [Error: getaddrinfo ENOTFOUND dokploy-postgres] { dokploy.1.x5giix5b03ow@srv919152 | errno: -3008, dokploy.1.x5giix5b03ow@srv919152 | code: 'ENOTFOUND', dokploy.1.x5giix5b03ow@srv919152 | syscall: 'getaddrinfo', dokploy.1.x5giix5b03ow@srv919152 | hostname: 'dokploy-postgres' dokploy.1.x5giix5b03ow@srv919152 | } dokploy.1.x5giix5b03ow@srv919152 |  ELIFECYCLE  Command failed. 
dokploy.1.gnkb7t4ngsq9@srv919152 | dokploy.1.gnkb7t4ngsq9@srv919152 | > [email protected] start /app dokploy.1.gnkb7t4ngsq9@srv919152 | > node -r dotenv/config dist/server.mjs dokploy.1.gnkb7t4ngsq9@srv919152 | dokploy.1.gnkb7t4ngsq9@srv919152 | Default middlewares already exists dokploy.1.gnkb7t4ngsq9@srv919152 | Network is already initilized dokploy.1.gnkb7t4ngsq9@srv919152 | Main config already exists dokploy.1.gnkb7t4ngsq9@srv919152 | Default traefik config already exists dokploy.1.gnkb7t4ngsq9@srv919152 | Migration failed [Error: getaddrinfo ENOTFOUND dokploy-postgres] { dokploy.1.gnkb7t4ngsq9@srv919152 | errno: -3008, dokploy.1.gnkb7t4ngsq9@srv919152 | code: 'ENOTFOUND', dokploy.1.gnkb7t4ngsq9@srv919152 | syscall: 'getaddrinfo', dokploy.1.gnkb7t4ngsq9@srv919152 | hostname: 'dokploy-postgres' dokploy.1.gnkb7t4ngsq9@srv919152 | } dokploy.1.gnkb7t4ngsq9@srv919152 | Setting up cron jobs.... dokploy.1.gnkb7t4ngsq9@srv919152 | Main Server Error [Error: getaddrinfo ENOTFOUND dokploy-postgres] { dokploy.1.gnkb7t4ngsq9@srv919152 | errno: -3008, dokploy.1.gnkb7t4ngsq9@srv919152 | code: 'ENOTFOUND', dokploy.1.gnkb7t4ngsq9@srv919152 | syscall: 'getaddrinfo', dokploy.1.gnkb7t4ngsq9@srv919152 | hostname: 'dokploy-postgres' dokploy.1.gnkb7t4ngsq9@srv919152 | } dokploy.1.g9gvdj13eo8l@srv919152 | dokploy.1.g9gvdj13eo8l@srv919152 | > [email protected] start /app dokploy.1.g9gvdj13eo8l@srv919152 | > node -r dotenv/config dist/server.mjs dokploy.1.g9gvdj13eo8l@srv919152 | dokploy.1.g9gvdj13eo8l@srv919152 | Default middlewares already exists dokploy.1.g9gvdj13eo8l@srv919152 | Network is already initilized dokploy.1.g9gvdj13eo8l@srv919152 | Main config already exists dokploy.1.g9gvdj13eo8l@srv919152 | Default traefik config already exists dokploy.1.g9gvdj13eo8l@srv919152 | { dokploy.1.g9gvdj13eo8l@srv919152 | severity_local: 'NOTICE', dokploy.1.g9gvdj13eo8l@srv919152 | severity: 'NOTICE', dokploy.1.g9gvdj13eo8l@srv919152 | code: '42P06', dokploy.1.g9gvdj13eo8l@srv919152 | message: 'schema "drizzle" already exists, skipping', dokploy.1.g9gvdj13eo8l@srv919152 | file: 'schemacmds.c', dokploy.1.g9gvdj13eo8l@srv919152 | line: '132', dokploy.1.g9gvdj13eo8l@srv919152 | routine: 'CreateSchemaCommand' dokploy.1.g9gvdj13eo8l@srv919152 | } dokploy.1.g9gvdj13eo8l@srv919152 | { dokploy.1.g9gvdj13eo8l@srv919152 | severity_local: 'NOTICE', dokploy.1.g9gvdj13eo8l@srv919152 | severity: 'NOTICE', dokploy.1.g9gvdj13eo8l@srv919152 | code: '42P07', dokploy.1.g9gvdj13eo8l@srv919152 | message: 'relation "__drizzle_migrations" already exists, skipping', dokploy.1.g9gvdj13eo8l@srv919152 | file: 'parse_utilcmd.c', dokploy.1.g9gvdj13eo8l@srv919152 | line: '207', dokploy.1.g9gvdj13eo8l@srv919152 | routine: 'transformCreateStmt' dokploy.1.g9gvdj13eo8l@srv919152 | } dokploy.1.g9gvdj13eo8l@srv919152 | Migration complete dokploy.1.g9gvdj13eo8l@srv919152 | Setting up cron jobs.... dokploy.1.g9gvdj13eo8l@srv919152 | [Backup] web-server Enabled with cron: [0 0 * * *] dokploy.1.g9gvdj13eo8l@srv919152 | [Backup] postgres Enabled with cron: [0 0 * * *] dokploy.1.g9gvdj13eo8l@srv919152 | [Backup] postgres Enabled with cron: [0 0 * * *] dokploy.1.g9gvdj13eo8l@srv919152 | Starting log requests cleanup 0 0 * * * dokploy.1.g9gvdj13eo8l@srv919152 | Initializing 0 schedules dokploy.1.g9gvdj13eo8l@srv919152 | Setting up volume backups cron jobs.... 
dokploy.1.g9gvdj13eo8l@srv919152 | Initializing 0 volume backups dokploy.1.g9gvdj13eo8l@srv919152 | Server Started on: http://0.0.0.0:3000 dokploy.1.g9gvdj13eo8l@srv919152 | Starting Deployment Worker dokploy.1.g9gvdj13eo8l@srv919152 |  ELIFECYCLE  Command failed.

root@server ~# docker service logs dokploy-postgres

dokploy-postgres.1.0nyjxyctt5p7@srv919152:
PostgreSQL Database directory appears to contain a database; Skipping initialization
2025-08-22 11:11:29.394 UTC [1] LOG: starting PostgreSQL 16.9 (Debian 16.9-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2025-08-22 11:11:29.394 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2025-08-22 11:11:29.394 UTC [1] LOG: listening on IPv6 address "::", port 5432
2025-08-22 11:11:29.398 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2025-08-22 11:11:29.407 UTC [28] LOG: database system was shut down at 2025-08-22 00:35:17 UTC
2025-08-22 11:11:29.407 UTC [28] LOG: record with incorrect prev-link 2D200420/18C800 at 0/1EF6770
2025-08-22 11:11:29.407 UTC [28] LOG: invalid checkpoint record
2025-08-22 11:11:29.407 UTC [28] PANIC: could not locate a valid checkpoint record
2025-08-22 11:11:29.541 UTC [1] LOG: startup process (PID 28) was terminated by signal 6: Aborted
2025-08-22 11:11:29.541 UTC [1] LOG: aborting startup due to startup process failure
2025-08-22 11:11:29.542 UTC [1] LOG: database system is shut down

dokploy-postgres.1.k5ivykpkmerb@srv919152:
PostgreSQL Database directory appears to contain a database; Skipping initialization
2025-08-22 11:11:17.846 UTC [1] LOG: starting PostgreSQL 16.9 (Debian 16.9-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2025-08-22 11:11:17.846 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2025-08-22 11:11:17.846 UTC [1] LOG: listening on IPv6 address "::", port 5432
2025-08-22 11:11:17.848 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2025-08-22 11:11:17.853 UTC [28] LOG: database system was shut down at 2025-08-22 00:35:17 UTC
2025-08-22 11:11:17.853 UTC [28] LOG: record with incorrect prev-link 2D200420/18C800 at 0/1EF6770
2025-08-22 11:11:17.853 UTC [28] LOG: invalid checkpoint record
2025-08-22 11:11:17.853 UTC [28] PANIC: could not locate a valid checkpoint record
2025-08-22 11:11:17.971 UTC [1] LOG: startup process (PID 28) was terminated by signal 6: Aborted
2025-08-22 11:11:17.971 UTC [1] LOG: aborting startup due to startup process failure
2025-08-22 11:11:17.972 UTC [1] LOG: database system is shut down

dokploy-postgres.1.wunkzsqww70f@srv919152:
PostgreSQL Database directory appears to contain a database; Skipping initialization
2025-08-22 11:11:12.048 UTC [1] LOG: starting PostgreSQL 16.9 (Debian 16.9-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2025-08-22 11:11:12.048 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2025-08-22 11:11:12.048 UTC [1] LOG: listening on IPv6 address "::", port 5432
2025-08-22 11:11:12.054 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2025-08-22 11:11:12.063 UTC [28] LOG: database system was shut down at 2025-08-22 00:35:17 UTC
2025-08-22 11:11:12.063 UTC [28] LOG: record with incorrect prev-link 2D200420/18C800 at 0/1EF6770
2025-08-22 11:11:12.063 UTC [28] LOG: invalid checkpoint record
2025-08-22 11:11:12.063 UTC [28] PANIC: could not locate a valid checkpoint record
2025-08-22 11:11:12.186 UTC [1] LOG: startup process (PID 28) was terminated by signal 6: Aborted
2025-08-22 11:11:12.186 UTC [1] LOG: aborting startup due to startup process failure
2025-08-22 11:11:12.187 UTC [1] LOG: database system is shut down

dokploy-postgres.1.v7gpz77ud2ut@srv919152:
PostgreSQL Database directory appears to contain a database; Skipping initialization
2025-08-22 11:11:23.586 UTC [1] LOG: starting PostgreSQL 16.9 (Debian 16.9-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2025-08-22 11:11:23.586 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2025-08-22 11:11:23.586 UTC [1] LOG: listening on IPv6 address "::", port 5432
2025-08-22 11:11:23.589 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2025-08-22 11:11:23.595 UTC [28] LOG: database system was shut down at 2025-08-22 00:35:17 UTC
2025-08-22 11:11:23.595 UTC [28] LOG: record with incorrect prev-link 2D200420/18C800 at 0/1EF6770
2025-08-22 11:11:23.595 UTC [28] LOG: invalid checkpoint record
2025-08-22 11:11:23.595 UTC [28] PANIC: could not locate a valid checkpoint record
2025-08-22 11:11:23.727 UTC [1] LOG: startup process (PID 28) was terminated by signal 6: Aborted
2025-08-22 11:11:23.727 UTC [1] LOG: aborting startup due to startup process failure
2025-08-22 11:11:23.728 UTC [1] LOG: database system is shut down

Will you send a PR to fix it?

No

NemurenaiDev avatar Aug 22 '25 11:08 NemurenaiDev

When you refer to N restarts, do you mean that you manually restarted the server N times?

Although Dokploy should not restart unless you update to a new version.

Siumauricio avatar Aug 24 '25 05:08 Siumauricio

Hello, I am experiencing the (same?) issue. I can reproduce it on v0.24.12:

  • create a new PostgreSQL database (v15 or v17, I tried both)
  • open it to the outside
  • try to connect with the wrong password (for instance)
  • this will generate an error and trigger a container restart
  • it seems like the default restart policy makes the restarts so fast that the db can't initialize
  • after a few minutes, I encounter a volume corruption and I have to manually reset the PG WAL

That's a bit strange, but it seems like any error (for instance FATAL: wrong password...) will trigger a container restart (by Swarm?), while it shouldn't.
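
For anyone who wants to check this on their own setup, the restart policy Swarm applies can be inspected and relaxed on the running service. This is only a mitigation sketch, not a fix for the corruption itself; the service name is a placeholder for whichever Postgres service Dokploy created:

# show the restart policy Swarm currently applies to the database service
docker service inspect <your-postgres-service> \
  --format '{{json .Spec.TaskTemplate.RestartPolicy}}'

# space restart attempts out so Postgres has time to finish crash recovery between them
docker service update \
  --restart-condition on-failure \
  --restart-delay 30s \
  --restart-max-attempts 5 \
  <your-postgres-service>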

Androz2091 avatar Aug 25 '25 06:08 Androz2091

When you refer to N restarts, do you mean that you manually restarted the server N times?

Although Dokploy should not restart unless you update to a new version.

Of course not. I sent you the restart notifications, and each of them Dokploy did on its own; I haven't restarted it by hand at all. After each restart there was an "ELIFECYCLE  Command failed." error in the logs.

NemurenaiDev avatar Aug 25 '25 08:08 NemurenaiDev

Just checked my dokploy instance and it's dead again.

Hello, I am experiencing the (same?) issue. I can reproduce the same issue on v0.24.12:

* create a new PostgreSQL database (v15 or v17, I tried both)

* open it to the outside

* try to connect with the wrong password (for instance)

* this will generate an error and trigger a container restart

* it seems like the default restart policy makes the restarts so fast that the db can't initialize

* after a few minutes, I encounter a volume corruption and I have to manually reset the PG WAL

That's a bit strange, but it seems like any error (for instance FATAL: wrong password...) will trigger a container restart (by Swarm?), while it shouldn't.

Will try adjusting the restart policy, great idea, thanks.

Also, is there a way to enable full logs for the dokploy container, not just the generic "ELIFECYCLE  Command failed"? I want to understand why exactly it crashes and provide better logs.
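
(For what it's worth, Swarm itself records why each task exited, so something along these lines might already surface the reason; the OOM check at the end is only a guess, not something confirmed here:)

# task history for the dokploy service, including error/exit reasons, untruncated
docker service ps --no-trunc dokploy

# full, timestamped service logs without truncation
docker service logs --timestamps --no-trunc dokploy

# check whether the kernel OOM killer terminated the node process (pure guess)
dmesg -T | grep -iE 'out of memory|killed process'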

NemurenaiDev avatar Aug 25 '25 08:08 NemurenaiDev

During the last month I have tried:

docker service create \
  --name dokploy-postgres \
  --constraint 'node.role==manager' \
  --network dokploy-network \
  --env POSTGRES_USER=dokploy \
  --env POSTGRES_DB=dokploy \
  --env POSTGRES_PASSWORD=pass-hidden \
  --mount type=volume,source=dokploy-postgres-database,target=/var/lib/postgresql/data \
  --restart-condition on-failure \
  --restart-max-attempts 5 \
  --restart-delay 30s \
  postgres:16

and

docker service create \
  --name dokploy-postgres \
  --constraint 'node.role==manager' \
  --network dokploy-network \
  --env POSTGRES_USER=dokploy \
  --env POSTGRES_DB=dokploy \
  --env POSTGRES_PASSWORD=pass-hidden \
  --mount type=volume,source=dokploy-postgres-database,target=/var/lib/postgresql/data \
  --restart-condition on-failure \
  --restart-max-attempts 5 \
  --restart-delay 30s \
  --health-cmd "pg_isready -U dokploy || exit 1" \
  --health-interval 10s \
  --health-retries 5 \
  --health-timeout 5s \
  --health-start-period 10s \
  postgres:16

but neither of them helped with the issue. Postgres kept restarting along with Dokploy itself anyway. I restored the database from backup several times, but since 17.09.2025 Dokploy has not restarted once. I assume the crash issue was fixed in one of the updates, but the issue of Postgres restarting whenever Dokploy does, without any delay, still persists.

NemurenaiDev avatar Sep 29 '25 08:09 NemurenaiDev

I have been having a similar issue: sometimes when the postgres instance is rebooted or reloaded, it starts throwing the error below.

PostgreSQL Database directory appears to contain a database; Skipping initialization
2025-10-15 12:31:51.139 UTC [1] LOG: starting PostgreSQL 16.10 (Debian 16.10-1.pgdg13+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 14.2.0-19) 14.2.0, 64-bit
2025-10-15 12:31:51.140 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2025-10-15 12:31:51.140 UTC [1] LOG: listening on IPv6 address "::", port 5432
2025-10-15 12:31:51.142 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2025-10-15 12:31:51.147 UTC [29] LOG: database system was shut down at 2025-10-15 12:29:51 UTC
2025-10-15 12:31:51.147 UTC [29] LOG: record with incorrect prev-link C2200420/69001 at 0/205EC98
2025-10-15 12:31:51.147 UTC [29] LOG: invalid checkpoint record
2025-10-15 12:31:51.147 UTC [29] PANIC: could not locate a valid checkpoint record
2025-10-15 12:31:51.271 UTC [1] LOG: startup process (PID 29) was terminated by signal 6: Aborted
2025-10-15 12:31:51.271 UTC [1] LOG: aborting startup due to startup process failure
2025-10-15 12:31:51.273 UTC [1] LOG: database system is shut down

I'm on dokploy version 0.24.8, and the postgres image is postgres:16

I found that the fastest way to recreate this issue is to open the external port and then remove the port and save again.

Is there a way to add a safe shutdown/restart?

EDIT: Just to add clarity, I am talking about starting a new postgres DB instance, not the dokploy-postgres that runs with dokploy.
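
A workaround sketch for the safe-shutdown question, untested and with the service name as a placeholder: tell Swarm to send Postgres a fast-shutdown signal and give it a longer grace period before the task is killed:

# SIGINT asks Postgres for a "fast" shutdown (write a final checkpoint, then exit);
# the longer grace period keeps Swarm from force-killing it mid-write
docker service update \
  --stop-signal SIGINT \
  --stop-grace-period 120s \
  <your-postgres-service>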

KeenanFernandes2000 avatar Oct 15 '25 15:10 KeenanFernandes2000

Had the same issue: the postgres database was corrupted by dokploy turning itself off or restarting somehow. But I could fix the problem with pg_resetwal, which rebuilt the write-ahead log. Didn't have a backup, so I hope this helps someone out there!
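
Roughly, the steps look like this (a sketch only; the service and volume names are assumed from the commands earlier in this thread, and pg_resetwal -f can discard the most recent transactions, so copy the volume first):

# stop the broken Postgres service so nothing touches the data directory
docker service scale dokploy-postgres=0

# run pg_resetwal against the same volume, as the postgres user
docker run --rm \
  --mount type=volume,source=dokploy-postgres-database,target=/var/lib/postgresql/data \
  postgres:16 \
  gosu postgres pg_resetwal -f /var/lib/postgresql/data

# start the service again; Postgres should come up with a rebuilt WAL
docker service scale dokploy-postgres=1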

WijMakenSitesSites avatar Nov 04 '25 13:11 WijMakenSitesSites

I'm on dokploy version 0.24.8, and the postgres image is postgres:16 I found that the fastest way to recreate this issue, is to open the external port and then remove the port and save again. is there a way to add a safe shutdown/restart

Yes, I can reproduce with this!

Androz2091 avatar Nov 05 '25 15:11 Androz2091

Same here

I'm on dokploy version 0.24.8, and the postgres image is postgres:16 I found that the fastest way to recreate this issue, is to open the external port and then remove the port and save again. is there a way to add a safe shutdown/restart

Yes, I can reproduce with this!

Same here.

axadrn avatar Nov 06 '25 15:11 axadrn

Got the same problem yesterday when opening and removing the external port.

NoeGrangier avatar Nov 19 '25 14:11 NoeGrangier

had the same issue postgress database was corrupted by dockploy turning itself off or restarting somehow. But could fix the problem by pg_resetwal, which rebuilt the write-ahead log. Didnt have a backup so hope this helps someone out there!

Mind adding the steps or commands so people can see a solution?

Siumauricio avatar Nov 26 '25 07:11 Siumauricio

@Siumauricio, I discovered the root cause of PostgreSQL data loss when changing ports in Dokploy. The issue is a volume mount path mismatch: Dokploy mounts the volume to /var/lib/postgresql/18/data, but PostgreSQL 18 actually stores its data in /var/lib/postgresql/18/docker. Because of this, the real data is never persisted to the volume. When the container restarts after a port change, PostgreSQL sees an empty directory and initialises a fresh database, wiping all existing data.

The fix is to change the volume mount path to /var/lib/postgresql/18/docker, or set the PGDATA environment variable to /var/lib/postgresql/18/data so PostgreSQL uses the mounted location. Dokploy should either auto-detect the correct data directory or explicitly set PGDATA to match the configured mount path. (PS: I am using PostgreSQL 18.)
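
Until Dokploy ships a fix, the two workarounds above look roughly like this when applied by hand to the Swarm service (a sketch only; service and volume names are placeholders, and Dokploy may overwrite these overrides on the next redeploy):

# option 1: mount the volume where the postgres:18 image actually writes its data
docker service update \
  --mount-rm /var/lib/postgresql/18/data \
  --mount-add type=volume,source=<db-volume>,target=/var/lib/postgresql/18/docker \
  <postgres-18-service>

# option 2: keep the existing mount and point PGDATA at it instead
docker service update \
  --env-add PGDATA=/var/lib/postgresql/18/data \
  <postgres-18-service>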

sammychinedu2ky avatar Dec 03 '25 17:12 sammychinedu2ky

https://hub.docker.com/_/postgres#pgdata https://github.com/Dokploy/dokploy/blob/6cafb15dbb93cb52555ea05bc1347c130b9770ec/packages/server/src/services/postgres.ts#L19-L25

sammychinedu2ky avatar Dec 03 '25 17:12 sammychinedu2ky

@sammychinedu2ky does this also apply to older PostgreSQL versions? I think the author of this issue and I are both using versions below 18. Thanks and congrats, hopefully your findings fix the issue for everyone else.

Androz2091 avatar Dec 03 '25 23:12 Androz2091

@Androz2091, I tested v15 and couldn't replicate the issue. Can you confirm if you adjusted your mounted volume?

sammychinedu2ky avatar Dec 04 '25 01:12 sammychinedu2ky

Forgot to mention, I am running Dokploy version v0.25.11.

sammychinedu2ky avatar Dec 04 '25 09:12 sammychinedu2ky