pgbackrest
pgbackrest 2.30 incremental backup aborted with exception [025]
Please provide the following information when submitting an issue (feature requests or general comments can skip this):
- pgBackRest version:
2.30
- PostgreSQL version:
postgres=# select version();
version
----------------------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 12.4 (EnterpriseDB Advanced Server 12.4.5) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5), 64-bit
(1 row)
- Operating system/version - if you have more than one server (for example, a database server, a repository host server, one or more standbys), please specify each:
Red Hat Enterprise Linux release 8.2 (Ootpa)
- Did you install pgBackRest from source or from a package?
package - pgbackrest-2.30-4.rhel8.x86_64
- Please attach the following as applicable:
`pgbackrest.conf` file(s)
[global]
repo1-path=/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01
repo1-retention-full=1
log-level-console=info
log-level-file=info
archive-async=y
spool-path=/var/spool/pgbackrest
delta=y
archive-timeout=1800
start-fast=y
[global:archive-get]
process-max=2
[global:archive-push]
process-max=2
#[main]
#pg1-path=/var/lib/pgsql/10/data
[prodbackup02]
pg1-path=/dbpool/edb/as10/data
pg1-port=5444
pg1-user=enterprisedb
[standby]
pg1-path=/dbpool/edb/as10/data
pg1-port=5444
pg1-user=enterprisedb
[prodbackup01]
pg1-path=/dbpool/edb/as12/data
pg1-port=5444
pg1-user=enterprisedb
- `postgresql.conf` settings applicable to pgBackRest (`archive_command`, `archive_mode`, `listen_addresses`, `max_wal_senders`, `wal_level`, `port`)
postgres=# select name, setting from pg_settings where name in
postgres-# ('archive_command', 'archive_mode', 'listen_addresses', 'max_wal_senders', 'wal_level', 'port');
name | setting
------------------+--------------------------------------------------
archive_command | pgbackrest --stanza=prodbackup01 archive-push %p
archive_mode | on
listen_addresses | *
max_wal_senders | 10
port | 5444
wal_level | replica
(6 rows)
postgres=#
- errors in the postgresql log file before or during the time you experienced the issue
No PostgreSQL errors were reported before or during the backup.
- log file in `/var/log/pgbackrest` for the commands run (e.g. `/var/log/pgbackrest/mystanza_backup.log`)
2020-11-17 18:00:01.909 P00 INFO: backup command begin 2.30: --archive-timeout=1800 --delta --log-level-console=info --log-level-file=info --log-path=/var/lib/edb/log/pgbackrest --pg1-path=/dbpool/edb/as12/data --pg1-port=5444 --pg1-user=enterprisedb --process-max=16 --repo1-path=/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01 --repo1-retention-full=1 --stanza=prodbackup01 --start-fast --type=incr
2020-11-17 18:00:03.754 P00 INFO: last backup label = 20201114-180002F_20201116-180002I, version = 2.30
2020-11-17 18:00:03.754 P00 INFO: execute non-exclusive pg_start_backup(): backup begins after the requested immediate checkpoint completes
2020-11-17 18:00:05.256 P00 INFO: backup start archive = 0000000100023F82000000F2, lsn = 23F82/F20000D8
2020-11-17 18:01:02.243 P16 INFO: backup file /dbpool/edb/as12/data/pg_tblspc/16426/PG_12_201909212/16429/684396733.92 (1GB, 1%) checksum 616d06665515394669ef62551c974dd491085b6a
2020-11-17 18:01:02.670 P06 INFO: backup file /dbpool/edb/as12/data/pg_tblspc/16426/PG_12_201909212/16429/684396733.97 (1GB, 1%) checksum 739c2d4c34a2e9eb4fe4983d5e44aad64499f774
2020-11-17 18:01:03.252 P14 INFO: backup file /dbpool/edb/as12/data/pg_tblspc/16426/PG_12_201909212/16429/684396733.9 (1GB, 1%) checksum ca9441877a839075b1993b781490c217076544ff
2020-11-17 18:01:04.012 P04 INFO: backup file /dbpool/edb/as12/data/pg_tblspc/16426/PG_12_201909212/16429/684396733.98 (1GB, 1%) checksum 2343e1c9d4cdf2dd92140b48c048eea0df0a2f70
2020-11-17 18:01:06.620 P12 INFO: backup file /dbpool/edb/as12/data/pg_tblspc/16426/PG_12_201909212/16429/684396733.94 (1GB, 1%) checksum ac8b904781307bb301eb748f1fac924190a1b1ab
2020-11-17 18:01:06.620 P08 INFO: backup file /dbpool/edb/as12/data/pg_tblspc/16426/PG_12_201909212/16429/684396733.91 (1GB, 1%) checksum cd332239644312d85a1baf5424cd1684a6c29a40
2020-11-17 18:01:08.965 P02 INFO: backup file /dbpool/edb/as12/data/pg_tblspc/16426/PG_12_201909212/16429/684396733.99 (1GB, 1%) checksum 44391aaf8045251a628fb73153c2bf4cc84a4dde
.
.
.
2020-11-17 19:56:17.001 P01 INFO: backup file /dbpool/edb/as12/data/base/16429/4287329715 (0B, 99%)
2020-11-17 19:56:17.001 P02 INFO: backup file /dbpool/edb/as12/data/base/16429/4287298935 (0B, 99%)
2020-11-17 19:56:17.088 P00 INFO: incr backup size = 6068.2GB
2020-11-17 19:56:17.088 P00 INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
ASSERT: [025]: check 'cancel != NULL' failed
--------------------------------------------------------------------
If SUBMITTING AN ISSUE please provide the following information:
version: 2.30
command: backup
options: --archive-timeout=1800 --delta --log-level-console=info --log-level-file=info --log-path=/var/lib/edb/log/pgbackrest --pg1-path=/dbpool/edb/as12/data --pg1-port=5444 --pg1-user=enterprisedb --process-max=16 --repo1-path=/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01 --repo1-retention-full=1 --stanza=prodbackup01 --start-fast --type=incr
stack trace:
postgres/client:pgClientQuery:206:(debug log level required for parameters)
db/db:dbQuery:(debug log level required for parameters)
db/db:dbQueryRow:(debug log level required for parameters)
db/db:dbBackupStop:(debug log level required for parameters)
command/backup/backup:backupStop:(debug log level required for parameters)
command/backup/backup:cmdBackup:(debug log level required for parameters)
main:main:(debug log level required for parameters)
--------------------------------------------------------------------
2020-11-17 20:26:17.166 P00 INFO: backup command end: aborted with exception [025]
- Describe the issue:
Incremental backup failed/aborted with exception [025]
Backup error/exception:
2020-11-17 19:56:17.001 P01 INFO: backup file /dbpool/edb/as12/data/base/16429/4287329715 (0B, 99%)
2020-11-17 19:56:17.001 P02 INFO: backup file /dbpool/edb/as12/data/base/16429/4287298935 (0B, 99%)
2020-11-17 19:56:17.088 P00 INFO: incr backup size = 6068.2GB
2020-11-17 19:56:17.088 P00 INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
ASSERT: [025]: check 'cancel != NULL' failed
--------------------------------------------------------------------
If SUBMITTING AN ISSUE please provide the following information:
version: 2.30
command: backup
options: --archive-timeout=1800 --delta --log-level-console=info --log-level-file=info --log-path=/var/lib/edb/log/pgbackrest --pg1-path=/dbpool/edb/as12/data --pg1-port=5444 --pg1-user=enterprisedb --process-max=16 --repo1-path=/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01 --repo1-retention-full=1 --stanza=prodbackup01 --start-fast --type=incr
stack trace:
postgres/client:pgClientQuery:206:(debug log level required for parameters)
db/db:dbQuery:(debug log level required for parameters)
db/db:dbQueryRow:(debug log level required for parameters)
db/db:dbBackupStop:(debug log level required for parameters)
command/backup/backup:backupStop:(debug log level required for parameters)
command/backup/backup:cmdBackup:(debug log level required for parameters)
main:main:(debug log level required for parameters)
--------------------------------------------------------------------
2020-11-17 20:26:17.166 P00 INFO: backup command end: aborted with exception [025]
Backup Info:
stanza: prodbackup01
status: ok
cipher: none
db (current)
wal archive min/max (12-1): 0000000100023DF0000000D6/0000000100023F90000000B4
full backup: 20201114-180002F
timestamp start/stop: 2020-11-14 18:00:02 / 2020-11-14 23:28:37
wal start/stop: 0000000100023DF0000000D6 / 0000000100023DFE000000D8
database size: 5888.6GB, backup size: 5888.6GB
repository size: 1515.2GB, repository backup size: 1515.2GB
incr backup: 20201114-180002F_20201115-180002I
timestamp start/stop: 2020-11-15 18:00:02 / 2020-11-15 19:21:31
wal start/stop: 0000000100023E4D0000001A / 0000000100023E5900000060
database size: 6052.6GB, backup size: 988.8GB
repository size: 1539.5GB, repository backup size: 191.6GB
backup reference list: 20201114-180002F
incr backup: 20201114-180002F_20201116-180002I
timestamp start/stop: 2020-11-16 18:00:02 / 2020-11-16 20:31:42
wal start/stop: 0000000100023EF3000000C3 / 0000000100023F17000000C1
database size: 6047.2GB, backup size: 2192.7GB
repository size: 1537.6GB, repository backup size: 512.2GB
backup reference list: 20201114-180002F, 20201114-180002F_20201115-180002I
Backup Configuration/Check:
2020-11-17 20:26:18.344 P00 INFO: check command begin 2.30: --archive-timeout=1800 --log-level-console=info --log-level-file=info --pg1-path=/dbpool/edb/as12/data --pg1-port=5444 --pg1-user=enterprisedb --repo1-path=/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01 --stanza=prodbackup01
2020-11-17 20:26:19.967 P00 INFO: WAL segment 0000000100023F90000000B5 successfully archived to '/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01/archive/prodbackup01/12-1/0000000100023F90/0000000100023F90000000B5-ee24f2bd68b2b159a2b60aa6c07de03d2982b13d.gz'
2020-11-17 20:26:19.967 P00 INFO: check command end: completed successfully (1623ms)
It looks like the pg_stop_backup() function did not return within db-timeout (default 30 minutes), and when pgBackRest tried to cancel the query the connection had already been dropped. We need to work on that error message, though.
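To illustrate the failure mode, here is a standalone libpq sketch (not pgBackRest source; the connection parameters and the restart scenario are illustrative only). The query is sent asynchronously so it can be abandoned when db-timeout expires, but PQgetCancel() returns NULL once the connection is no longer valid -- which is the condition behind the 'cancel != NULL' assertion above.

/*
 * Standalone libpq sketch (not pgBackRest source) of the failure mode above.
 * Connection parameters and the restart scenario are illustrative only.
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("host=localhost port=5444 dbname=postgres user=enterprisedb");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Send pg_stop_backup() asynchronously so it can be abandoned if db-timeout expires. */
    if (!PQsendQuery(conn, "select pg_catalog.pg_stop_backup(false, false)"))
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));

    /* ... assume db-timeout expires here and the server connection has meanwhile
     * been dropped (e.g. the cluster was restarted during the backup) ... */

    /* PQgetCancel() returns NULL when the connection is no longer valid, which is
     * the condition behind "ASSERT: [025]: check 'cancel != NULL' failed". */
    PGcancel *cancel = PQgetCancel(conn);

    if (cancel == NULL)
        fprintf(stderr, "cannot cancel: connection is no longer usable\n");
    else
    {
        char errbuf[256];

        if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
            fprintf(stderr, "cancel failed: %s\n", errbuf);

        PQfreeCancel(cancel);
    }

    PQfinish(conn);
    return 0;
}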
On PG12 pg_stop_backup() should always return immediately, so it's not clear why that didn't happen. PG12 minor releases just came out -- any chance this cluster was being updated (and restarted) during the backup? We just saw a similar issue where an update was the cause.
If that's the case, this should be a one-time issue. Is it still happening?
Ran a full backup with --log-level-console=debug --log-level-file=debug and it completed successfully. Will schedule another incremental backup and close the issue if it also completes successfully. An example invocation and the section of the log where pg_stop_backup() was called are below:
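(The exact command is not recorded in the thread; the invocation was presumably along these lines, with the stanza and logging options taken from above.)

pgbackrest --stanza=prodbackup01 --type=full --log-level-console=debug --log-level-file=debug backup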
2020-11-18 23:29:02.482 P00 INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
2020-11-18 23:29:02.482 P00 DEBUG: db/db::dbBackupStop: (this: {client: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, remoteClient: null})
2020-11-18 23:29:02.482 P00 DEBUG: db/db::dbQueryRow: (this: {client: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, remoteClient: null}, query: {"select lsn::text as lsn,
pg_catalog.pg_walfile_name(lsn)::text as wal_segment_name,
labelfile::text as backuplabel_file,
spcmapfile::text as tablespacemap_file
from pg_catalog.pg_stop_backup(false, false)"})
2020-11-18 23:29:02.482 P00 DEBUG: db/db::dbQuery: (this: {client: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, remoteClient: null}, query: {"select lsn::text as lsn,
pg_catalog.pg_walfile_name(lsn)::text as wal_segment_name,
labelfile::text as backuplabel_file,
spcmapfile::text as tablespacemap_file
from pg_catalog.pg_stop_backup(false, false)"})
2020-11-18 23:29:02.482 P00 DEBUG: postgres/client::pgClientQuery: (this: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, query: {"select lsn::text as lsn,
pg_catalog.pg_walfile_name(lsn)::text as wal_segment_name,
labelfile::text as backuplabel_file,
spcmapfile::text as tablespacemap_file
from pg_catalog.pg_stop_backup(false, false)"})
2020-11-18 23:29:02.682 P00 DEBUG: postgres/client::pgClientQuery: => {VariantList}
2020-11-18 23:29:02.682 P00 DEBUG: db/db::dbQuery: => {VariantList}
2020-11-18 23:29:02.682 P00 DEBUG: db/db::dbQueryRow: => {VariantList}
2020-11-18 23:29:02.682 P00 DEBUG: db/db::dbBackupStop: => {DbBackupStopResult}
2020-11-18 23:29:02.682 P00 DEBUG: command/backup/backup::backupTime: (backupData: {BackupData}, waitRemainder: false)
2020-11-18 23:29:02.682 P00 DEBUG: db/db::dbTimeMSec: (this: {client: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, remoteClient: null})
2020-11-18 23:29:02.682 P00 DEBUG: db/db::dbQueryColumn: (this: {client: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, remoteClient: null}, query: {"select (extract(epoch from clock_timestamp()) * 1000)::bigint"})
2020-11-18 23:29:02.682 P00 DEBUG: db/db::dbQuery: (this: {client: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, remoteClient: null}, query: {"select (extract(epoch from clock_timestamp()) * 1000)::bigint"})
2020-11-18 23:29:02.682 P00 DEBUG: postgres/client::pgClientQuery: (this: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, query: {"select (extract(epoch from clock_timestamp()) * 1000)::bigint"})
2020-11-18 23:29:02.783 P00 DEBUG: postgres/client::pgClientQuery: => {VariantList}
2020-11-18 23:29:02.783 P00 DEBUG: db/db::dbQuery: => {VariantList}
2020-11-18 23:29:02.783 P00 DEBUG: db/db::dbQueryColumn: => {1605734942694}
2020-11-18 23:29:02.783 P00 DEBUG: db/db::dbTimeMSec: => 1605734942694
2020-11-18 23:29:02.783 P00 DEBUG: command/backup/backup::backupTime: => 1605734942
2020-11-18 23:29:02.783 P00 INFO: backup stop archive = 0000000100023FEF00000023, lsn = 23FEF/23178BE8
2020-11-18 23:29:02.783 P00 DEBUG: command/backup/backup::backupFilePut: (backupData: {BackupData}, manifest: {Manifest}, name: {"backup_label"}, timestamp: 1605734942, content: {"START WAL LOCATION: 23FE0/BD00CC40 (file 0000000100023FE0000000BD)
CHECKPOINT LOCATION: 23FE0/BD02B860
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2020-11-18 18:00:03 SAST
LABEL: pgBackRest backup started at 18-NOV-20 18:00:02.763554 +02:00
START TIMELINE: 1
"})
2020-11-18 23:29:02.783 P00 DEBUG: storage/storage::storageNewWrite: (this: {type: posix, path: {"/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01"}, write: true}, fileExp: {"<REPO:BACKUP>/20201118-180002F/pg_data/backup_label.gz"}, param.modeFile: 0000, param.modePath: 0000, param.user: null, param.group: null, param.timeModified: 0, param.noCreatePath: false, param.noSyncFile: false, param.noSyncPath: false, param.noAtomic: false, param.compressible: true)
2020-11-18 23:29:02.783 P00 DEBUG: storage/storage::storageNewWrite: => {type: posix, name: {"/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01/backup/prodbackup01/20201118-180002F/pg_data/backup_label.gz"}, modeFile: 0640, modePath: 0750, createPath: true, syncFile: true, syncPath: true, atomic: true}
.
.
.
2020-11-18 23:29:02.788 P00 DEBUG: common/io/filter/group::ioFilterGroupResult: => {64}
2020-11-18 23:29:02.788 P00 DEBUG: common/io/filter/group::ioFilterGroupResult: (this: {inputSame: false, done: true}, filterType: {"hash"})
2020-11-18 23:29:02.788 P00 DEBUG: common/io/filter/group::ioFilterGroupResult: => {"18fc422b054b027835c5e71b53be773ac1b24c11"}
2020-11-18 23:29:02.788 P00 DETAIL: wrote 'tablespace_map' file returned from pg_stop_backup()
2020-11-18 23:29:02.788 P00 DEBUG: command/backup/backup::backupFilePut: => void
2020-11-18 23:29:02.788 P00 DEBUG: command/backup/backup::backupStop: => {BackupStopResult}
2020-11-18 23:29:02.788 P00 DEBUG: info/manifest::manifestBuildComplete: (this: {Manifest}, timestampStart: 1605715202, lsnStart: {"23FE0/BD00CC40"}, archiveStart: {"0000000100023FE0000000BD"}, timestampStop: 1605734942, lsnStop: {"23FEF/23178BE8"}, archiveStop: {"0000000100023FEF00000023"}, pgId: 1, pgSystemId: 6881918819606537286, dbList: {VariantList}, optionArchiveCheck: true, optionArchiveCopy: false, optionBufferSize: 1048576, optionCompressLevel: 6, optionCompressLevelNetwork: 3, optionHardLink: false, optionProcessMax: 16, optionStandby: false)
2020-11-18 23:29:02.788 P00 DEBUG: info/manifest::manifestBuildComplete: => void
2020-11-18 23:29:02.788 P00 DEBUG: command/backup/backup::backupArchiveCheckCopy: (manifest: {Manifest}, walSegmentSize: 16777216)
2020-11-18 23:29:02.788 P00 INFO: check archive for segment(s) 0000000100023FE0000000BD:0000000100023FEF00000023
2020-11-18 23:29:02.789 P00 DEBUG: command/backup/backup::backupManifestSaveCopy: (manifest: {Manifest})
2020-11-18 23:29:02.789 P00 DEBUG: storage/storage::storageNewWrite: (this: {type: posix, path: {"/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01"}, write: true}, fileExp: {"<REPO:BACKUP>/20201118-180002F/backup.manifest.copy"}, param.modeFile: 0000, param.modePath: 0000, param.user: null, param.group: null, param.timeModified: 0, param.noCreatePath: false, param.noSyncFile: false, param.noSyncPath: false, param.noAtomic: false, param.compressible: false)
2020-11-18 23:29:02.789 P00 DEBUG: storage/storage::storageNewWrite: => {type: posix, name: {"/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01/backup/prodbackup01/20201118-180002F/backup.manifest.copy"}, modeFile: 0640, modePath: 0750, createPath: true, syncFile: true, syncPath: true, atomic: true}
2020-11-18 23:29:02.789 P00 DEBUG: info/manifest::manifestSave: (this: {Manifest}, write: {IoWrite})
2020-11-18 23:29:02.796 P00 DEBUG: info/info::infoSave: (this: {Info}, write: {IoWrite}, callbackFunction: (function *), callbackData: *void)
2020-11-18 23:29:03.235 P00 DEBUG: info/info::infoSave: => void
2020-11-18 23:29:03.236 P00 DEBUG: info/manifest::manifestSave: => void
Backup Info:
stanza: prodbackup01
status: ok
cipher: none
db (current)
wal archive min/max (12-1): 0000000100023DF0000000D6/000000010002403F000000B5
full backup: 20201114-180002F
timestamp start/stop: 2020-11-14 18:00:02 / 2020-11-14 23:28:37
wal start/stop: 0000000100023DF0000000D6 / 0000000100023DFE000000D8
database size: 5888.6GB, backup size: 5888.6GB
repository size: 1515.2GB, repository backup size: 1515.2GB
incr backup: 20201114-180002F_20201115-180002I
timestamp start/stop: 2020-11-15 18:00:02 / 2020-11-15 19:21:31
wal start/stop: 0000000100023E4D0000001A / 0000000100023E5900000060
database size: 6052.6GB, backup size: 988.8GB
repository size: 1539.5GB, repository backup size: 191.6GB
backup reference list: 20201114-180002F
incr backup: 20201114-180002F_20201116-180002I
timestamp start/stop: 2020-11-16 18:00:02 / 2020-11-16 20:31:42
wal start/stop: 0000000100023EF3000000C3 / 0000000100023F17000000C1
database size: 6047.2GB, backup size: 2192.7GB
repository size: 1537.6GB, repository backup size: 512.2GB
backup reference list: 20201114-180002F, 20201114-180002F_20201115-180002I
full backup: 20201118-180002F
timestamp start/stop: 2020-11-18 18:00:02 / 2020-11-18 23:29:02
wal start/stop: 0000000100023FE0000000BD / 0000000100023FEF00000023
database size: 6025.5GB, backup size: 6025.5GB
repository size: 1531.8GB, repository backup size: 1531.8GB
The incremental backup completed successfully, so this issue can be closed. Many thanks to the pgBackRest team for a robust and reliable solution. Below is a log snippet where pg_stop_backup() was called:
2020-11-19 19:27:35.670 P00 DEBUG: command/backup/backup::backupProcess: => void
2020-11-19 19:27:35.670 P00 DEBUG: command/backup/backup::backupStop: (backupData: {BackupData}, manifest: {Manifest})
2020-11-19 19:27:35.670 P00 INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
2020-11-19 19:27:35.670 P00 DEBUG: db/db::dbBackupStop: (this: {client: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, remoteClient: null})
2020-11-19 19:27:35.670 P00 DEBUG: db/db::dbQueryRow: (this: {client: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, remoteClient: null}, query: {"select lsn::text as lsn,
pg_catalog.pg_walfile_name(lsn)::text as wal_segment_name,
labelfile::text as backuplabel_file,
spcmapfile::text as tablespacemap_file
from pg_catalog.pg_stop_backup(false, false)"})
2020-11-19 19:27:35.670 P00 DEBUG: db/db::dbQuery: (this: {client: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, remoteClient: null}, query: {"select lsn::text as lsn,
pg_catalog.pg_walfile_name(lsn)::text as wal_segment_name,
labelfile::text as backuplabel_file,
spcmapfile::text as tablespacemap_file
from pg_catalog.pg_stop_backup(false, false)"})
2020-11-19 19:27:35.670 P00 DEBUG: postgres/client::pgClientQuery: (this: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, query: {"select lsn::text as lsn,
pg_catalog.pg_walfile_name(lsn)::text as wal_segment_name,
labelfile::text as backuplabel_file,
spcmapfile::text as tablespacemap_file
from pg_catalog.pg_stop_backup(false, false)"})
2020-11-19 19:27:35.771 P00 DEBUG: postgres/client::pgClientQuery: => {VariantList}
2020-11-19 19:27:35.771 P00 DEBUG: db/db::dbQuery: => {VariantList}
2020-11-19 19:27:35.771 P00 DEBUG: db/db::dbQueryRow: => {VariantList}
2020-11-19 19:27:35.771 P00 DEBUG: db/db::dbBackupStop: => {DbBackupStopResult}
2020-11-19 19:27:35.771 P00 DEBUG: command/backup/backup::backupTime: (backupData: {BackupData}, waitRemainder: false)
2020-11-19 19:27:35.771 P00 DEBUG: db/db::dbTimeMSec: (this: {client: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, remoteClient: null})
2020-11-19 19:27:35.771 P00 DEBUG: db/db::dbQueryColumn: (this: {client: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, remoteClient: null}, query: {"select (extract(epoch from clock_timestamp()) * 1000)::bigint"})
2020-11-19 19:27:35.771 P00 DEBUG: db/db::dbQuery: (this: {client: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, remoteClient: null}, query: {"select (extract(epoch from clock_timestamp()) * 1000)::bigint"})
2020-11-19 19:27:35.771 P00 DEBUG: postgres/client::pgClientQuery: (this: {host: null, port: 5444, database: {"postgres"}, user: {"enterprisedb"}, queryTimeout 1800000}, query: {"select (extract(epoch from clock_timestamp()) * 1000)::bigint"})
2020-11-19 19:27:35.871 P00 DEBUG: postgres/client::pgClientQuery: => {VariantList}
2020-11-19 19:27:35.871 P00 DEBUG: db/db::dbQuery: => {VariantList}
2020-11-19 19:27:35.871 P00 DEBUG: db/db::dbQueryColumn: => {1605806855772}
2020-11-19 19:27:35.871 P00 DEBUG: db/db::dbTimeMSec: => 1605806855772
2020-11-19 19:27:35.871 P00 DEBUG: command/backup/backup::backupTime: => 1605806855
2020-11-19 19:27:35.871 P00 INFO: backup stop archive = 000000010002405100000091, lsn = 24051/9167CC40
2020-11-19 19:27:35.871 P00 DEBUG: command/backup/backup::backupFilePut: (backupData: {BackupData}, manifest: {Manifest}, name: {"backup_label"}, timestamp: 1605806855, content: {"START WAL LOCATION: 24043/CF0000D8 (file 0000000100024043000000CF)
CHECKPOINT LOCATION: 24043/CF000110
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2020-11-19 18:00:04 SAST
LABEL: pgBackRest backup started at 19-NOV-20 18:00:04.009361 +02:00
START TIMELINE: 1
"})
2020-11-19 19:27:35.871 P00 DEBUG: storage/storage::storageNewWrite: (this: {type: posix, path: {"/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01"}, write: true}, fileExp: {"<REPO:BACKUP>/20201118-180002F_20201119-180002I/pg_data/backup_label.gz"}, param.modeFile: 0000, param.modePath: 0000, param.user: null, param.group: null, param.timeModified: 0, param.noCreatePath: false, param.noSyncFile: false, param.noSyncPath: false, param.noAtomic: false, param.compressible: true)
2020-11-19 19:27:35.871 P00 DEBUG: storage/storage::storageNewWrite: => {type: posix, name: {"/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01/backup/prodbackup01/20201118-180002F_20201119-180002I/pg_data/backup_label.gz"}, modeFile: 0640, modePath: 0750, createPath: true, syncFile: true, syncPath: true, atomic: true}
2020-11-19 19:27:35.871 P00 DEBUG: common/io/filter/group::ioFilterGroupAdd: (this: {inputSame: false, done: false}, filter: {type: hash})
2020-11-19 19:27:35.871 P00 DEBUG: common/io/filter/group::ioFilterGroupAdd: => {inputSame: false, done: false}
2020-11-19 19:27:35.871 P00 DEBUG: common/io/filter/group::ioFilterGroupAdd: (this: {inputSame: false, done: false}, filter: {type: gzCompress})
2020-11-19 19:27:35.871 P00 DEBUG: common/io/filter/group::ioFilterGroupAdd: => {inputSame: false, done: false}
2020-11-19 19:27:35.871 P00 DEBUG: common/io/filter/group::ioFilterGroupAdd: (this: {inputSame: false, done: false}, filter: {type: size})
2020-11-19 19:27:35.872 P00 DEBUG: common/io/filter/group::ioFilterGroupAdd: => {inputSame: false, done: false}
2020-11-19 19:27:35.872 P00 DEBUG: storage/storage::storagePut: (file: {type: posix, name: {"/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01/backup/prodbackup01/20201118-180002F_20201119-180002I/pg_data/backup_label.gz"}, modeFile: 0640, modePath: 0750, createPath: true, syncFile: true, syncPath: true, atomic: true}, buffer: {used: 271, size: 271})
2020-11-19 19:27:35.875 P00 DEBUG: storage/storage::storagePut: => void
2020-11-19 19:27:35.875 P00 DEBUG: common/io/filter/group::ioFilterGroupResult: (this: {inputSame: false, done: true}, filterType: {"size"})
2020-11-19 19:27:35.875 P00 DEBUG: common/io/filter/group::ioFilterGroupResult: => {211}
2020-11-19 19:27:35.875 P00 DEBUG: common/io/filter/group::ioFilterGroupResult: (this: {inputSame: false, done: true}, filterType: {"hash"})
2020-11-19 19:27:35.875 P00 DEBUG: common/io/filter/group::ioFilterGroupResult: => {"49b328372e630586e5b49a91f0316ed7137c4641"}
2020-11-19 19:27:35.875 P00 DETAIL: wrote 'backup_label' file returned from pg_stop_backup()
2020-11-19 19:27:35.875 P00 DEBUG: command/backup/backup::backupFilePut: => void
2020-11-19 19:27:35.875 P00 DEBUG: command/backup/backup::backupFilePut: (backupData: {BackupData}, manifest: {Manifest}, name: {"tablespace_map"}, timestamp: 1605806855, content: {"16426 /dbpool/edb/as12/other_pg_tablespaces
"})
2020-11-19 19:27:35.875 P00 DEBUG: storage/storage::storageNewWrite: (this: {type: posix, path: {"/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01"}, write: true}, fileExp: {"<REPO:BACKUP>/20201118-180002F_20201119-180002I/pg_data/tablespace_map.gz"}, param.modeFile: 0000, param.modePath: 0000, param.user: null, param.group: null, param.timeModified: 0, param.noCreatePath: false, param.noSyncFile: false, param.noSyncPath: false, param.noAtomic: false, param.compressible: true)
2020-11-19 19:27:35.875 P00 DEBUG: storage/storage::storageNewWrite: => {type: posix, name: {"/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01/backup/prodbackup01/20201118-180002F_20201119-180002I/pg_data/tablespace_map.gz"}, modeFile: 0640, modePath: 0750, createPath: true, syncFile: true, syncPath: true, atomic: true}
2020-11-19 19:27:35.875 P00 DEBUG: common/io/filter/group::ioFilterGroupAdd: (this: {inputSame: false, done: false}, filter: {type: hash})
2020-11-19 19:27:35.875 P00 DEBUG: common/io/filter/group::ioFilterGroupAdd: => {inputSame: false, done: false}
2020-11-19 19:27:35.875 P00 DEBUG: common/io/filter/group::ioFilterGroupAdd: (this: {inputSame: false, done: false}, filter: {type: gzCompress})
2020-11-19 19:27:35.875 P00 DEBUG: common/io/filter/group::ioFilterGroupAdd: => {inputSame: false, done: false}
2020-11-19 19:27:35.875 P00 DEBUG: common/io/filter/group::ioFilterGroupAdd: (this: {inputSame: false, done: false}, filter: {type: size})
2020-11-19 19:27:35.875 P00 DEBUG: common/io/filter/group::ioFilterGroupAdd: => {inputSame: false, done: false}
2020-11-19 19:27:35.875 P00 DEBUG: storage/storage::storagePut: (file: {type: posix, name: {"/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01/backup/prodbackup01/20201118-180002F_20201119-180002I/pg_data/tablespace_map.gz"}, mo
deFile: 0640, modePath: 0750, createPath: true, syncFile: true, syncPath: true, atomic: true}, buffer: {used: 44, size: 44})
2020-11-19 19:27:35.877 P00 DEBUG: storage/storage::storagePut: => void
2020-11-19 19:27:35.877 P00 DEBUG: common/io/filter/group::ioFilterGroupResult: (this: {inputSame: false, done: true}, filterType: {"size"})
2020-11-19 19:27:35.877 P00 DEBUG: common/io/filter/group::ioFilterGroupResult: => {64}
2020-11-19 19:27:35.877 P00 DEBUG: common/io/filter/group::ioFilterGroupResult: (this: {inputSame: false, done: true}, filterType: {"hash"})
2020-11-19 19:27:35.877 P00 DEBUG: common/io/filter/group::ioFilterGroupResult: => {"18fc422b054b027835c5e71b53be773ac1b24c11"}
2020-11-19 19:27:35.877 P00 DETAIL: wrote 'tablespace_map' file returned from pg_stop_backup()
2020-11-19 19:27:35.877 P00 DEBUG: command/backup/backup::backupFilePut: => void
2020-11-19 19:27:35.877 P00 DEBUG: command/backup/backup::backupStop: => {BackupStopResult}
2020-11-19 19:27:35.877 P00 DEBUG: info/manifest::manifestBuildComplete: (this: {Manifest}, timestampStart: 1605801602, lsnStart: {"24043/CF0000D8"}, archiveStart: {"0000000100024043000000CF"}, timestampStop: 1605806855, lsnStop: {"24051/9167CC40"}, archiveStop: {"000000010002405100000091"}, pgId: 1, pgSystemId: 6881918819606537286, dbList: {VariantList}, optionArchiveCheck: true, optionArchiveCopy: false, optionBufferSize: 1048576, optionCompressLevel: 6, optionCompressLevelNetwork: 3, optionHardLink: false, optionProcessMax: 16, optionStandby: false)
2020-11-19 19:27:35.877 P00 DEBUG: info/manifest::manifestBuildComplete: => void
2020-11-19 19:27:35.877 P00 DEBUG: command/backup/backup::backupArchiveCheckCopy: (manifest: {Manifest}, walSegmentSize: 16777216)
2020-11-19 19:27:35.878 P00 INFO: check archive for segment(s) 0000000100024043000000CF:000000010002405100000091
2020-11-19 19:27:35.878 P00 DEBUG: command/backup/backup::backupManifestSaveCopy: (manifest: {Manifest})
2020-11-19 19:27:35.878 P00 DEBUG: storage/storage::storageNewWrite: (this: {type: posix, path: {"/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01"}, write: true}, fileExp: {"<REPO:BACKUP>/20201118-180002F_20201119-180002I/backup.manifest.copy"}, param.modeFile: 0000, param.modePath: 0000, param.user: null, param.group: null, param.timeModified: 0, param.noCreatePath: false, param.noSyncFile: false, param.noSyncPath: false, param.noAtomic: false, param.compressible: false)
2020-11-19 19:27:35.878 P00 DEBUG: storage/storage::storageNewWrite: => {type: posix, name: {"/backup2/pgbackrest_bkpdir/cssa-stb-prod-edb-01/backup/prodbackup01/20201118-180002F_20201119-180002I/backup.manifest.copy"}, modeFile: 0640, modePath: 0750, createPath: true, syncFile: true, syncPath: true, atomic: true}
2020-11-19 19:27:35.878 P00 DEBUG: info/manifest::manifestSave: (this: {Manifest}, write: {IoWrite})
2020-11-19 19:27:35.884 P00 DEBUG: info/info::infoSave: (this: {Info}, write: {IoWrite}, callbackFunction: (function *), callbackData: *void)
Backup info:
stanza: prodbackup01
status: ok
cipher: none
db (current)
wal archive min/max (12-1): 0000000100023DF0000000D6/000000010002405100000091
full backup: 20201114-180002F
timestamp start/stop: 2020-11-14 18:00:02 / 2020-11-14 23:28:37
wal start/stop: 0000000100023DF0000000D6 / 0000000100023DFE000000D8
database size: 5888.6GB, backup size: 5888.6GB
repository size: 1515.2GB, repository backup size: 1515.2GB
incr backup: 20201114-180002F_20201115-180002I
timestamp start/stop: 2020-11-15 18:00:02 / 2020-11-15 19:21:31
wal start/stop: 0000000100023E4D0000001A / 0000000100023E5900000060
database size: 6052.6GB, backup size: 988.8GB
repository size: 1539.5GB, repository backup size: 191.6GB
backup reference list: 20201114-180002F
incr backup: 20201114-180002F_20201116-180002I
timestamp start/stop: 2020-11-16 18:00:02 / 2020-11-16 20:31:42
wal start/stop: 0000000100023EF3000000C3 / 0000000100023F17000000C1
database size: 6047.2GB, backup size: 2192.7GB
repository size: 1537.6GB, repository backup size: 512.2GB
backup reference list: 20201114-180002F, 20201114-180002F_20201115-180002I
full backup: 20201118-180002F
timestamp start/stop: 2020-11-18 18:00:02 / 2020-11-18 23:29:02
wal start/stop: 0000000100023FE0000000BD / 0000000100023FEF00000023
database size: 6025.5GB, backup size: 6025.5GB
repository size: 1531.8GB, repository backup size: 1531.8GB
incr backup: 20201118-180002F_20201119-180002I
timestamp start/stop: 2020-11-19 18:00:02 / 2020-11-19 19:27:35
wal start/stop: 0000000100024043000000CF / 000000010002405100000091
database size: 6094.7GB, backup size: 1210.0GB
repository size: 1546.3GB, repository backup size: 248.9GB
backup reference list: 20201118-180002F
Good, looks like it was transient. I'm going to keep this open so we can improve that error message, though.