Pruning is not working as expected (missing days)
Hello,
I have some pruning issues on the replication destination. The source looks good so far:
/sbin/zfs list -t snapshot | grep "wolke7"
nc_storage/clouddata@wolke7_20220617_091500_000 1.21G - 17.1T -
nc_storage/clouddata@wolke7_20220618_091500_000 2.46G - 17.1T -
nc_storage/clouddata@wolke7_20220619_091500_000 1.61G - 17.1T -
nc_storage/clouddata@wolke7_20220620_091500_000 3.83G - 17.1T -
nc_storage/clouddata@wolke7_20220621_091503_000 1.55G - 17.2T -
nc_storage/clouddata@wolke7_20220622_094507_000 1.90G - 17.2T -
nc_storage/clouddata@wolke7_20220623_101510_000 1.54G - 17.2T -
nc_storage/clouddata@wolke7_20220624_110012_000 3.98G - 17.2T -
nc_storage/clouddata@wolke7_20220625_110013_000 1.27G - 17.2T -
nc_storage/clouddata@wolke7_20220626_114515_000 1.43G - 17.2T -
nc_storage/clouddata@wolke7_20220627_124516_000 2.54G - 17.2T -
nc_storage/clouddata@wolke7_20220628_124516_000 1.85G - 17.2T -
nc_storage/clouddata@wolke7_20220629_124517_000 2.49G - 17.2T -
nc_storage/clouddata@wolke7_20220630_084519_000 218M - 17.2T -
nc_storage/clouddata@wolke7_20220630_094519_000 351M - 17.2T -
nc_storage/clouddata@wolke7_20220630_104519_000 37.3M - 17.2T -
nc_storage/clouddata@wolke7_20220630_114519_000 31.7M - 17.2T -
nc_storage/clouddata@wolke7_20220630_124519_000 984M - 17.2T -
nc_storage/clouddata@wolke7_20220630_151519_000 233M - 17.2T -
nc_storage/clouddata@wolke7_20220630_171520_000 365M - 17.2T -
nc_storage/clouddata@wolke7_20220630_183020_000 247M - 17.2T -
nc_storage/clouddata@wolke7_20220630_211519_000 291M - 17.2T -
nc_storage/clouddata@wolke7_20220630_221519_000 125M - 17.2T -
nc_storage/clouddata@wolke7_20220630_231519_000 54.9M - 17.2T -
nc_storage/clouddata@wolke7_20220701_001520_000 21.1M - 17.2T -
nc_storage/clouddata@wolke7_20220701_011519_000 64.6M - 17.2T -
nc_storage/clouddata@wolke7_20220701_021519_000 14.9M - 17.2T -
nc_storage/clouddata@wolke7_20220701_031520_000 13.3M - 17.2T -
nc_storage/clouddata@wolke7_20220701_041519_000 75.1M - 17.2T -
nc_storage/clouddata@wolke7_20220701_051519_000 17.9M - 17.2T -
nc_storage/clouddata@wolke7_20220701_061520_000 188M - 17.2T -
nc_storage/clouddata@wolke7_20220701_071519_000 41.8M - 17.2T -
nc_storage/clouddata@wolke7_20220701_081519_000 21.8M - 17.2T -
nc_storage/clouddata@wolke7_20220701_083019_000 4.02M - 17.2T -
nc_storage/clouddata@wolke7_20220701_084520_000 4.49M - 17.2T -
nc_storage/clouddata@wolke7_20220701_090019_000 562M - 17.2T -
nc_storage/clouddata@wolke7_20220701_091519_000 4.05M - 17.2T -
nc_storage/localdata@wolke7_20220617_090000_000 2.64M - 2.25T -
nc_storage/localdata@wolke7_20220618_090000_000 192K - 2.25T -
nc_storage/localdata@wolke7_20220619_090000_000 380K - 2.25T -
nc_storage/localdata@wolke7_20220620_091500_000 1.46M - 2.25T -
nc_storage/localdata@wolke7_20220621_091503_000 4.64M - 2.25T -
nc_storage/localdata@wolke7_20220622_091508_000 4.24M - 2.25T -
nc_storage/localdata@wolke7_20220623_091510_000 17.4M - 2.25T -
nc_storage/localdata@wolke7_20220624_093012_000 1.90M - 2.25T -
nc_storage/localdata@wolke7_20220625_100013_000 104K - 2.25T -
nc_storage/localdata@wolke7_20220626_101515_000 104K - 2.25T -
nc_storage/localdata@wolke7_20220627_101516_000 1008K - 2.25T -
nc_storage/localdata@wolke7_20220628_110016_000 3.14M - 2.25T -
nc_storage/localdata@wolke7_20220629_111517_000 2.77M - 2.25T -
nc_storage/localdata@wolke7_20220630_084519_000 1.07M - 2.25T -
nc_storage/localdata@wolke7_20220630_094519_000 640K - 2.25T -
nc_storage/localdata@wolke7_20220630_104519_000 236K - 2.25T -
nc_storage/localdata@wolke7_20220630_114520_000 408K - 2.25T -
nc_storage/localdata@wolke7_20220630_124520_000 564K - 2.25T -
nc_storage/localdata@wolke7_20220630_134520_000 708K - 2.25T -
nc_storage/localdata@wolke7_20220630_144520_000 104K - 2.25T -
nc_storage/localdata@wolke7_20220630_154519_000 0B - 2.25T -
nc_storage/localdata@wolke7_20220630_164520_000 0B - 2.25T -
nc_storage/localdata@wolke7_20220630_174520_000 0B - 2.25T -
nc_storage/localdata@wolke7_20220630_184520_000 0B - 2.25T -
nc_storage/localdata@wolke7_20220630_194520_000 0B - 2.25T -
nc_storage/localdata@wolke7_20220630_204520_000 0B - 2.25T -
nc_storage/localdata@wolke7_20220630_214520_000 0B - 2.25T -
nc_storage/localdata@wolke7_20220630_224520_000 0B - 2.25T -
nc_storage/localdata@wolke7_20220630_234520_000 0B - 2.25T -
nc_storage/localdata@wolke7_20220701_004520_000 0B - 2.25T -
nc_storage/localdata@wolke7_20220701_014520_000 0B - 2.25T -
nc_storage/localdata@wolke7_20220701_024520_000 0B - 2.25T -
nc_storage/localdata@wolke7_20220701_034520_000 0B - 2.25T -
nc_storage/localdata@wolke7_20220701_044520_000 104K - 2.25T -
nc_storage/localdata@wolke7_20220701_054520_000 768K - 2.25T -
nc_storage/localdata@wolke7_20220701_064520_000 1.05M - 2.25T -
nc_storage/localdata@wolke7_20220701_074520_000 184K - 2.25T -
nc_storage/localdata@wolke7_20220701_083020_000 136K - 2.25T -
nc_storage/localdata@wolke7_20220701_084520_000 136K - 2.25T -
nc_storage/localdata@wolke7_20220701_090020_000 0B - 2.25T -
nc_storage/localdata@wolke7_20220701_091520_000 0B - 2.25T -
nc_storage/mariadb@wolke7_20220617_080000_000 1.25G - 11.1G -
nc_storage/mariadb@wolke7_20220618_081500_000 1.17G - 11.1G -
nc_storage/mariadb@wolke7_20220619_081500_000 1.18G - 11.1G -
nc_storage/mariadb@wolke7_20220620_090000_000 1.22G - 11.1G -
nc_storage/mariadb@wolke7_20220621_091503_000 1.28G - 11.1G -
nc_storage/mariadb@wolke7_20220622_094508_000 1.25G - 11.1G -
nc_storage/mariadb@wolke7_20220623_094510_000 1.23G - 11.1G -
nc_storage/mariadb@wolke7_20220624_094512_000 1.24G - 11.1G -
nc_storage/mariadb@wolke7_20220625_103013_000 1.21G - 11.1G -
nc_storage/mariadb@wolke7_20220626_103015_000 1.22G - 11.1G -
nc_storage/mariadb@wolke7_20220627_103016_000 1.25G - 11.1G -
nc_storage/mariadb@wolke7_20220628_103016_000 1.26G - 11.1G -
nc_storage/mariadb@wolke7_20220629_113017_000 1.26G - 11.1G -
nc_storage/mariadb@wolke7_20220630_083019_000 1.14G - 11.1G -
nc_storage/mariadb@wolke7_20220630_093019_000 1.10G - 11.1G -
nc_storage/mariadb@wolke7_20220630_103019_000 1.09G - 11.1G -
nc_storage/mariadb@wolke7_20220630_113020_000 1.10G - 11.1G -
nc_storage/mariadb@wolke7_20220630_123020_000 1.15G - 11.1G -
nc_storage/mariadb@wolke7_20220630_133020_000 1.10G - 11.1G -
nc_storage/mariadb@wolke7_20220630_143020_000 1.10G - 11.1G -
nc_storage/mariadb@wolke7_20220630_153020_000 1.08G - 11.1G -
nc_storage/mariadb@wolke7_20220630_171519_000 1.07G - 11.1G -
nc_storage/mariadb@wolke7_20220630_181520_000 1.05G - 11.1G -
nc_storage/mariadb@wolke7_20220630_191520_000 1.04G - 11.1G -
nc_storage/mariadb@wolke7_20220630_201520_000 1.04G - 11.1G -
nc_storage/mariadb@wolke7_20220630_211520_000 1.05G - 11.1G -
nc_storage/mariadb@wolke7_20220630_220020_000 1.04G - 11.1G -
nc_storage/mariadb@wolke7_20220630_231520_000 1.02G - 11.1G -
nc_storage/mariadb@wolke7_20220701_001520_000 1.02G - 11.1G -
nc_storage/mariadb@wolke7_20220701_011520_000 1.03G - 11.1G -
nc_storage/mariadb@wolke7_20220701_021520_000 1.04G - 11.1G -
nc_storage/mariadb@wolke7_20220701_031519_000 1.03G - 11.1G -
nc_storage/mariadb@wolke7_20220701_041520_000 1.03G - 11.1G -
nc_storage/mariadb@wolke7_20220701_051520_000 1.03G - 11.1G -
nc_storage/mariadb@wolke7_20220701_061520_000 1.05G - 11.1G -
nc_storage/mariadb@wolke7_20220701_080019_000 1.09G - 11.1G -
nc_storage/mariadb@wolke7_20220701_083020_000 1.07G - 11.1G -
nc_storage/mariadb@wolke7_20220701_084519_000 1.07G - 11.1G -
nc_storage/mariadb@wolke7_20220701_090020_000 1.08G - 11.1G -
nc_storage/mariadb@wolke7_20220701_091520_000 1.07G - 11.1G -
The config of the source is:
global:
  logging:
    # use syslog instead of stdout because it makes journald happy
    - type: syslog
      format: human
      level: warn

jobs:
  # this job takes care of snapshot creation + pruning
  - name: zfs-servjob
    type: snap
    filesystems: {
      "nc_storage/clouddata<": true,
      "nc_storage/localdata<": true,
      "nc_storage/mariadb<": true,
    }
    snapshotting:
      type: periodic
      interval: 15m
      prefix: wolke7_
    pruning:
      keep:
        - type: grid
          grid: 1x1h(keep=all) | 24x1h | 14x1d
          regex: "^wolke7_.*"
        # keep all snapshots that don't have the `wolke7_` prefix
        - type: regex
          negate: true
          regex: "^wolke7_.*"

  - name: onsitebackup
    type: source
    serve:
      type: tcp
      listen: :8888
      clients: {
        "192.168.50.12": "onsitebackup"
      }
    filesystems: {
      "nc_storage/clouddata<": true,
      "nc_storage/localdata<": true,
      "nc_storage/mariadb<": true,
    }
    # Snapshots are handled by the separate snap job
    snapshotting:
      type: manual
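
For context, my understanding of the grid rule: the intervals are laid out back to back, anchored at the newest snapshot that matches the regex and extending into the past; each NxD bucket keeps at most `keep` snapshots (default 1, keep=all keeps everything in that interval), and snapshots older than the last bucket are destroyed. A minimal Python sketch of that reading (a simplified model, not zrepl's actual implementation; keeping the oldest snapshot per bucket is an assumption):

from datetime import datetime, timedelta

def grid_keep(snap_times, buckets):
    """Simplified grid model. buckets: list of (width, keep); keep=None means keep=all."""
    kept, right = set(), max(snap_times)
    for width, keep in buckets:
        left = right - width
        in_bucket = sorted(t for t in snap_times if left < t <= right)
        kept.update(in_bucket if keep is None else in_bucket[:keep])  # oldest first (assumption)
        right = left
    return kept  # everything else would be destroyed

# the snap job's grid: 1x1h(keep=all) | 24x1h | 14x1d
grid = ([(timedelta(hours=1), None)]
        + 24 * [(timedelta(hours=1), 1)]
        + 14 * [(timedelta(days=1), 1)])

# pretend we snapshotted every 15 minutes for the last three days
newest = datetime(2022, 7, 1, 9, 15)
snaps = [newest - timedelta(minutes=15 * i) for i in range(3 * 96)]
print(f"{len(grid_keep(snaps, grid))} of {len(snaps)} snapshots kept")  # 30 of 288
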
But on our onsite backup several days are missing without any apparent reason. Here you can see that the 21st is missing everywhere, and the 22nd only exists in mariadb:
backupdata/wolke7/nc_storage/clouddata@wolke7_20220616_091500_000 14.3G - 15.0T -
backupdata/wolke7/nc_storage/clouddata@wolke7_20220617_091500_000 1.21G - 15.0T -
backupdata/wolke7/nc_storage/clouddata@wolke7_20220618_094500_000 2.56G - 15.0T -
backupdata/wolke7/nc_storage/clouddata@wolke7_20220620_081500_000 3.98G - 15.0T -
backupdata/wolke7/nc_storage/clouddata@wolke7_20220623_061510_000 1.88G - 15.0T -
backupdata/wolke7/nc_storage/clouddata@wolke7_20220625_050013_000 2.43G - 15.0T -
backupdata/wolke7/nc_storage/clouddata@wolke7_20220628_023016_000 1.57G - 15.0T -
backupdata/wolke7/nc_storage/clouddata@wolke7_20220630_003018_000 2.25G - 15.0T -
backupdata/wolke7/nc_storage/clouddata@wolke7_20220701_001520_000 540M - 15.0T -
backupdata/wolke7/nc_storage/clouddata@wolke7_20220701_071519_000 0B - 15.1T -
backupdata/wolke7/nc_storage/localdata@wolke7_20220618_100000_000 6.73M - 2.18T -
backupdata/wolke7/nc_storage/localdata@wolke7_20220620_091500_000 3.13M - 2.18T -
backupdata/wolke7/nc_storage/localdata@wolke7_20220623_071510_000 27.5M - 2.18T -
backupdata/wolke7/nc_storage/localdata@wolke7_20220625_060013_000 420K - 2.18T -
backupdata/wolke7/nc_storage/localdata@wolke7_20220627_041515_000 420K - 2.18T -
backupdata/wolke7/nc_storage/localdata@wolke7_20220628_040016_000 6.52M - 2.18T -
backupdata/wolke7/nc_storage/localdata@wolke7_20220630_023018_000 7.60M - 2.18T -
backupdata/wolke7/nc_storage/localdata@wolke7_20220701_013020_000 1.86M - 2.18T -
backupdata/wolke7/nc_storage/localdata@wolke7_20220701_083020_000 0B - 2.18T -
backupdata/wolke7/nc_storage/mariadb@wolke7_20220616_091500_000 1.85G - 15.7G -
backupdata/wolke7/nc_storage/mariadb@wolke7_20220618_093000_000 1.46G - 15.7G -
backupdata/wolke7/nc_storage/mariadb@wolke7_20220620_081500_000 1.48G - 15.7G -
backupdata/wolke7/nc_storage/mariadb@wolke7_20220622_064507_000 1.58G - 15.7G -
backupdata/wolke7/nc_storage/mariadb@wolke7_20220624_053012_000 1.50G - 15.7G -
backupdata/wolke7/nc_storage/mariadb@wolke7_20220626_033014_000 1.53G - 15.7G -
backupdata/wolke7/nc_storage/mariadb@wolke7_20220628_013016_000 1.62G - 15.7G -
backupdata/wolke7/nc_storage/mariadb@wolke7_20220629_233018_000 1.57G - 15.7G -
backupdata/wolke7/nc_storage/mariadb@wolke7_20220630_231520_000 1.32G - 15.7G -
backupdata/wolke7/nc_storage/mariadb@wolke7_20220701_070020_000 0B - 15.7G -
The config for the onsite box looks like:
global:
  logging:
    # use syslog instead of stdout because it makes journald happy
    - type: syslog
      format: human
      level: warn

jobs:
  - name: onsite_backup
    type: pull
    connect:
      type: tcp
      address: "zfs-serv-nc.company.local:8888"
    root_fs: "backupdata/wolke7"
    interval: 1h
    recv:
      placeholder:
        encryption: off
    pruning:
      keep_sender:
        # Source does the pruning in its snap job
        - type: regex
          regex: '.*'
      keep_receiver:
        - type: grid
          # 35 days - daily | 13 months | 2 years
          grid: 35x1d | 13x30d | 2x365d
          regex: "wolke7_.*"

  - name: offsite_listen
    type: source
    serve:
      type: tcp
      listen: :8889
      clients: {
        "192.168.50.11": "offsite"
      }
    filesystems: {
      "backupdata/wolke7/nc_storage/clouddata": true,
      "backupdata/wolke7/nc_storage/localdata": true,
      "backupdata/wolke7/nc_storage/mariadb": true,
    }
    snapshotting:
      type: manual
The same issue exists on the offsite backup location:
NAME USED AVAIL REFER MOUNTPOINT
offsite/wolke7/backupdata/wolke7/nc_storage/clouddata@wolke7_20220620_091500_000 21.7G - 17.5T -
offsite/wolke7/backupdata/wolke7/nc_storage/clouddata@wolke7_20220622_064507_000 5.23G - 17.5T -
offsite/wolke7/backupdata/wolke7/nc_storage/clouddata@wolke7_20220625_030013_000 1.92G - 17.5T -
offsite/wolke7/backupdata/wolke7/nc_storage/clouddata@wolke7_20220627_003015_000 2.43G - 17.5T -
offsite/wolke7/backupdata/wolke7/nc_storage/clouddata@wolke7_20220629_203017_000 2.39G - 17.5T -
offsite/wolke7/backupdata/wolke7/nc_storage/clouddata@wolke7_20220630_193019_000 813M - 17.5T -
offsite/wolke7/backupdata/wolke7/nc_storage/clouddata@wolke7_20220701_071519_000 0B - 17.6T -
offsite/wolke7/backupdata/wolke7/nc_storage/localdata@wolke7_20220620_091500_000 47.2M - 2.25T -
offsite/wolke7/backupdata/wolke7/nc_storage/localdata@wolke7_20220623_051510_000 27.3M - 2.25T -
offsite/wolke7/backupdata/wolke7/nc_storage/localdata@wolke7_20220625_030013_000 439K - 2.25T -
offsite/wolke7/backupdata/wolke7/nc_storage/localdata@wolke7_20220626_231515_000 439K - 2.25T -
offsite/wolke7/backupdata/wolke7/nc_storage/localdata@wolke7_20220627_230016_000 6.79M - 2.25T -
offsite/wolke7/backupdata/wolke7/nc_storage/localdata@wolke7_20220629_194517_000 8.28M - 2.25T -
offsite/wolke7/backupdata/wolke7/nc_storage/localdata@wolke7_20220630_183020_000 1.87M - 2.25T -
offsite/wolke7/backupdata/wolke7/nc_storage/localdata@wolke7_20220701_083020_000 0B - 2.25T -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220620_091500_000 1.81G - 22.0G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220621_084503_000 1.58G - 22.0G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220623_063010_000 1.62G - 22.1G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220625_033013_000 1.58G - 22.1G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220626_231515_000 1.54G - 22.1G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220628_193017_000 1.71G - 22.1G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220630_171519_000 1.40G - 22.1G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220701_070020_000 0B - 22.1G -
And the config is:
global:
  logging:
    # use syslog instead of stdout because it makes journald happy
    - type: syslog
      format: human
      level: warn

jobs:
  - name: onsite_backup
    type: pull
    connect:
      type: tcp
      address: "zfs-serv-nc.company.local:8888"
    root_fs: "backupdata/wolke7"
    interval: 1h
    recv:
      placeholder:
        encryption: off
    pruning:
      keep_sender:
        # Source does the pruning in its snap job
        - type: regex
          regex: '.*'
      keep_receiver:
        - type: grid
          # 35 days - daily | 13 months | 2 years
          grid: 35x1d | 13x30d | 2x365d
          regex: "wolke7_.*"

  - name: datengrab2_listen
    type: source
    serve:
      type: tcp
      listen: :8889
      clients: {
        "192.168.50.11" : "datengrab"
      }
    filesystems: {
      "backupdata/wolke7/nc_storage/clouddata": true,
      "backupdata/wolke7/nc_storage/localdata": true,
      "backupdata/wolke7/nc_storage/mariadb": true,
    }
    snapshotting:
      type: manual
Any ideas why the daily pruning is not working?
Does adding a keep=all for the first 1d bucket help?
I.e., grid: 1x1d(keep=all) | 34x1d | 13x30d | 2x365d
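
For illustration, in the simplified model sketched earlier, the difference is that the front bucket stops the newest day from being thinned down to a single snapshot as soon as it is received (toy code, hypothetical numbers):

from datetime import datetime, timedelta

# With plain 35x1d, the newest day bucket keeps one snapshot, so
# everything else received in the last 24h is pruned right away.
# With 1x1d(keep=all) in front, the whole last day stays available,
# e.g. for a pull that arrives late.
newest = datetime(2022, 7, 1, 9, 15)
snaps = [newest - timedelta(hours=h) for h in range(48)]  # hourly snapshots, 2 days

def newest_day_bucket(snaps, newest, keep_all):
    day = [t for t in snaps if newest - timedelta(days=1) < t <= newest]
    return day if keep_all else [min(day)]  # keep-oldest is an assumption

print(len(newest_day_bucket(snaps, newest, keep_all=False)))  # 1  (35x1d)
print(len(newest_day_bucket(snaps, newest, keep_all=True)))   # 24 (1x1d(keep=all) | 34x1d)
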
We'll try that and give feedback in ~2 weeks.
Hello, on our onsite backup it works, but on the offsite backup it doesn't. (The offsite pulls its data from the onsite.) On the offsite, e.g. 20220814 and 20220812 are missing:
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220803_043103_000 1.43G - 22.2G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220804_043105_000 1.48G - 22.2G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220805_043108_000 1.53G - 22.2G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220806_043110_000 1.44G - 22.2G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220808_041616_000 1.48G - 22.2G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220810_023120_000 1.68G - 22.2G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220811_023124_000 1.66G - 22.2G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220813_014632_000 1.57G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220815_000141_000 1.57G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220816_223149_000 1.52G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220817_223155_000 1.48G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220818_223159_000 1.61G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220819_223203_000 1.35G - 22.3G -
[...]
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_030210_000 1.17G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_031710_000 1.16G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_033210_000 1.16G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_034709_000 1.16G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_040210_000 1.16G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_041710_000 1.17G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_043209_000 1.17G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_044710_000 1.17G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_050210_000 1.18G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_051710_000 1.17G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_053210_000 1.17G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_054710_000 1.17G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_060209_000 1.17G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_061710_000 1.19G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_063210_000 1.19G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_064710_000 1.18G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_070210_000 1.19G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_071710_000 1.20G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_073210_000 1.21G - 22.3G -
offsite/wolke7/backupdata/wolke7/nc_storage/mariadb@wolke7_20220822_074710_000 1.25G - 22.3G -
- Your config confuses me. Is it a chained setup (prod => onsite => offsite) or a star topology (prod => onsite, prod => offsite)? Regardless, it doesn't make sense that there are 3 source jobs in the configs you listed (onsitebackup, offsite_listen, datengrab2_listen).
- You are pulling at a 1h interval. But how long do your replications actually take? If they take longer than 1 hour, say, worst case, 1 day, then my recommendation to front the grid with a 1d keep=all interval (previous comment) won't be sufficient. Looking at the snapshot list that you posted, it seems to have helped, though. Right?
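
To make that concrete, here is a toy simulation on top of the same simplified grid model (illustrative only, not zrepl's algorithm). Because the day buckets are anchored at the newest snapshot rather than at calendar midnights, the surviving daily snapshots end up spaced slightly more than 24 hours apart whenever the pull interval doesn't divide the day evenly; sooner or later two consecutive survivors straddle a midnight, and that calendar day has no snapshot at all, with or without the keep=all front bucket, whose benefit in this model is only to keep the newest interval dense:

from datetime import datetime, timedelta

def prune(snaps, keep_all_first_day, n_days=35):
    # simplified model of `35x1d` resp. `1x1d(keep=all) | 34x1d`,
    # anchored at the newest snapshot
    newest = max(snaps)
    kept = set()
    for i in range(n_days):
        right = newest - timedelta(days=i)
        left = right - timedelta(days=1)
        bucket = [t for t in snaps if left < t <= right]
        if not bucket:
            continue
        if i == 0 and keep_all_first_day:
            kept.update(bucket)    # keep=all front bucket
        else:
            kept.add(min(bucket))  # one per day bucket (keep-oldest assumed)
    return kept

def missing_days(keep_all_first_day):
    t = datetime(2022, 6, 1, 9, 15)
    snaps = set()
    for _ in range(40 * 24):                # ~40 days of pulls
        snaps.add(t)
        snaps = prune(snaps, keep_all_first_day)
        t += timedelta(hours=1, minutes=1)  # pulls drift by a minute, cf. the listings
    covered = {s.date() for s in snaps}
    first, last = min(covered), max(covered)
    return [str(first + timedelta(days=d))
            for d in range(1, (last - first).days)
            if first + timedelta(days=d) not in covered]

print("35x1d:                 ", missing_days(False))
print("1x1d(keep=all) | 34x1d:", missing_days(True))
# with these parameters, both variants report a missing day (2022-07-08)

Under this toy model the gaps come from the anchoring itself, which would be consistent with days still going missing further down the chain even after the keep=all fix.
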
Yes, it's a chained setup. If you think there is something wrong, please tell me. I also noticed that the offsite config I posted above is wrong; this is the correct one:
global:
  logging:
    # use syslog instead of stdout because it makes journald happy
    - type: syslog
      format: human
      level: warn

jobs:
  - name: offsite_backup
    type: pull
    connect:
      type: tcp
      address: "datengrab2.company.local:8889"
    root_fs: "offsite/wolke7"
    interval: 2h
    recv:
      placeholder:
        encryption: off
    pruning:
      keep_sender:
        # Source does the pruning in its own job
        - type: regex
          regex: '.*'
      keep_receiver:
        - type: grid
          # 35 days - daily | 13 months
          grid: 1x1d(keep=all) | 35x1d | 13x30d
          regex: "wolke7_.*"
The replication takes just a few minutes. And yes, the keep=all change helped, but there are still missing days in the offsite location.