Sanoid not pruning?
I have sanoid set to keep 2 monthlies and 1 yearly, but I have 12 monthlies and 12 yearlies currently. I run sanoid from the following script manually weekly/biweekly when I plug in my external ZFS drive for backups:
#!/bin/bash
zpool import -a \
&& zpool status -v \
&& zfs mount -l my_zpool/my_dataset \
&& /usr/bin/time -p rsync -aiPh /home/pc/Documents/ /my_zpool/my_dataset/Documents \
&& sanoid --debug --cron > sanoid_snapshot_log \
&& zfs list -t snapshot >> sanoid_snapshot_log \
&& zfs list >> sanoid_snapshot_log \
&& zfs unmount my_zpool/my_dataset \
&& zpool export -a
Here's the result from sanoid --debug --prune-snapshots:
DEBUG: initializing $config{my_zpool/my_dataset} with default values from /usr/share/sanoid/sanoid.defaults.conf.
DEBUG: overriding frequently on my_zpool/my_dataset with value from user-defined template template_production.
DEBUG: overriding hourly on my_zpool/my_dataset with value from user-defined template template_production.
DEBUG: overriding daily on my_zpool/my_dataset with value from user-defined template template_production.
DEBUG: overriding weekly on my_zpool/my_dataset with value from user-defined template template_production.
DEBUG: overriding monthly on my_zpool/my_dataset with value from user-defined template template_production.
DEBUG: overriding yearly on my_zpool/my_dataset with value from user-defined template template_production.
DEBUG: overriding autosnap on my_zpool/my_dataset with value from user-defined template template_production.
DEBUG: overriding autoprune on my_zpool/my_dataset with value from user-defined template template_production.
DEBUG: getting list of child datasets on my_zpool/my_dataset using zfs list -o name -t filesystem,volume -Hr my_zpool/my_dataset |...
****** CONFIGS ******
$VAR1 = {
'my_zpool/my_dataset' => {
'autoprune' => 1,
'autosnap' => 1,
'capacity_crit' => '95',
'capacity_warn' => '80',
'daily' => '3',
'daily_crit' => '32h',
'daily_hour' => '23',
'daily_min' => '59',
'daily_warn' => '28h',
'force_post_snapshot_script' => '',
'frequent_period' => '15',
'frequently' => '0',
'frequently_crit' => '0',
'frequently_warn' => '0',
'hourly' => '0',
'hourly_crit' => '360m',
'hourly_min' => '0',
'hourly_warn' => '90m',
'monitor' => 1,
'monitor_dont_crit' => 0,
'monitor_dont_warn' => 0,
'monthly' => '2',
'monthly_crit' => '40d',
'monthly_hour' => '0',
'monthly_mday' => '1',
'monthly_min' => '0',
'monthly_warn' => '32d',
'no_inconsistent_snapshot' => '',
'path' => 'my_zpool/my_dataset',
'post_snapshot_script' => '',
'pre_snapshot_script' => '',
'prune_defer' => '0',
'pruning_script' => '',
'script_timeout' => '5',
'skip_children' => '',
'weekly' => '1',
'weekly_crit' => '0',
'weekly_hour' => '23',
'weekly_min' => '30',
'weekly_warn' => '0',
'weekly_wday' => '1',
'yearly' => '1',
'yearly_crit' => '0',
'yearly_hour' => '0',
'yearly_mday' => '1',
'yearly_min' => '0',
'yearly_mon' => '1',
'yearly_warn' => '0'
}
};
Filesystem my_zpool/my_dataset has:
28 total snapshots (newest: 0.3 hours old)
12 monthly
desired: 2
newest: 173.6 hours old, named autosnap_2022-06-09_16:17:39_monthly
3 daily
desired: 3
newest: 0.3 hours old, named autosnap_2022-06-17_01:30:10_daily
1 weekly
desired: 1
newest: 0.3 hours old, named autosnap_2022-06-17_01:30:10_weekly
12 yearly
desired: 1
newest: 173.6 hours old, named autosnap_2022-06-09_16:17:39_yearly
INFO: pruning snapshots...
Result from zfs list -t all -r my_zpool/my_dataset:
NAME USED AVAIL REFER MOUNTPOINT
my_zpool/my_dataset 168G 732G 138G /my_zpool/my_dataset
my_zpool/my_dataset@autosnap_2022-04-16_20:31:35_yearly 0B - 120G -
my_zpool/my_dataset@autosnap_2022-04-16_20:31:35_monthly 0B - 120G -
my_zpool/my_dataset@autosnap_2022-04-17_19:15:03_yearly 0B - 120G -
my_zpool/my_dataset@autosnap_2022-04-17_19:15:03_monthly 0B - 120G -
my_zpool/my_dataset@autosnap_2022-05-01_01:30:47_yearly 200K - 119G -
my_zpool/my_dataset@autosnap_2022-05-01_01:30:47_monthly 200K - 119G -
my_zpool/my_dataset@autosnap_2022-05-07_15:53:02_yearly 0B - 119G -
my_zpool/my_dataset@autosnap_2022-05-07_15:53:02_monthly 0B - 119G -
my_zpool/my_dataset@autosnap_2022-05-24_18:09:31_yearly 0B - 137G -
my_zpool/my_dataset@autosnap_2022-05-24_18:09:31_monthly 0B - 137G -
my_zpool/my_dataset@autosnap_2022-05-27_18:46:43_yearly 0B - 137G -
my_zpool/my_dataset@autosnap_2022-05-27_18:46:43_monthly 0B - 137G -
my_zpool/my_dataset@autosnap_2022-06-04_22:57:27_yearly 0B - 136G -
my_zpool/my_dataset@autosnap_2022-06-04_22:57:27_monthly 0B - 136G -
my_zpool/my_dataset@autosnap_2022-06-04_23:46:33_yearly 0B - 136G -
my_zpool/my_dataset@autosnap_2022-06-04_23:46:33_monthly 0B - 136G -
my_zpool/my_dataset@autosnap_2022-06-06_21:15:08_yearly 488K - 136G -
my_zpool/my_dataset@autosnap_2022-06-06_21:15:08_monthly 296K - 136G -
my_zpool/my_dataset@autosnap_2022-06-06_17:46:40_yearly 0B - 136G -
my_zpool/my_dataset@autosnap_2022-06-06_17:46:40_monthly 0B - 136G -
my_zpool/my_dataset@autosnap_2022-06-07_16:25:42_yearly 0B - 136G -
my_zpool/my_dataset@autosnap_2022-06-07_16:25:42_monthly 0B - 136G -
my_zpool/my_dataset@autosnap_2022-06-08_00:00:28_daily 0B - 136G -
my_zpool/my_dataset@autosnap_2022-06-09_16:17:39_yearly 0B - 137G -
my_zpool/my_dataset@autosnap_2022-06-09_16:17:39_monthly 0B - 137G -
my_zpool/my_dataset@autosnap_2022-06-09_16:17:39_daily 0B - 137G -
my_zpool/my_dataset@autosnap_2022-06-17_01:30:10_weekly 28.0M - 137G -
my_zpool/my_dataset@autosnap_2022-06-17_01:30:10_daily 320K - 137G -
Using sanoid version 2.1.0-1
I have the same issue on Ubuntu 20.04 LTS.
I have a total of 9 weeklies here as an example, even though I have it set to 4.
Sanoid seems to be creating them, but not removing them.
(This is consistent across all my datasets.)
This is my sanoid.conf:
# Datasets on this device
[BackUpAndSync/SyncThing_MyFiles/Benchmarks]
use_template = production
[BackUpAndSync/SyncThing_MyFiles/Mine PCer og Enheder]
use_template = production
[BackUpAndSync/SyncThing_MyFiles/OneDrive]
use_template = production
[BackUpAndSync/SyncThing_MyFiles/Programming Windows]
use_template = production
[BackUpAndSync/SyncThing_MyFiles/Smart BackUp]
use_template = production
[BackUpAndSync/SyncThing_MyFiles/Syncthing]
use_template = production
[BackUpAndSync/SyncThing_MyFiles/knowledge_database]
use_template = production
# Syncoid BackUp on RPI4 "Storage"
[BackUpAndSync/Billeder]
use_template = syncoidbackup
[BackUpAndSync/Cyberlink]
use_template = syncoidbackup
[BackUpAndSync/DV Camcorder Backup]
use_template = syncoidbackup
[BackUpAndSync/Movies]
use_template = syncoidbackup
[BackUpAndSync/Music]
use_template = syncoidbackup
# Syncoid BackUp Docker Containers on RPI4 "Docker-Files-ZFS"
[BackUpAndSync/Docker/Airsonic]
use_template = syncoidbackup
[BackUpAndSync/Docker/Booksonic]
use_template = syncoidbackup
[BackUpAndSync/Docker/Heimdall]
use_template = syncoidbackup
[BackUpAndSync/Docker/HomeAssistant]
use_template = syncoidbackup
[BackUpAndSync/Docker/JellyFin]
use_template = syncoidbackup
[BackUpAndSync/Docker/NextCloud]
use_template = syncoidbackup
[BackUpAndSync/Docker/PhotoPrism]
use_template = syncoidbackup
[BackUpAndSync/Docker/Portainer]
use_template = syncoidbackup
[BackUpAndSync/Docker/RustDesk]
use_template = syncoidbackup
[BackUpAndSync/Docker/SyncThing]
use_template = syncoidbackup
[BackUpAndSync/Docker/VaultWarden]
use_template = syncoidbackup
[BackUpAndSync/Docker/WatchTower]
use_template = syncoidbackup
#############################
# templates below this line #
#############################
[template_production]
frequently = 0
hourly = 36
daily = 7
weekly = 4
monthly = 12
yearly = 2
autosnap = yes
autoprune = yes
[template_syncoidbackup]
### remove outdated snapshots
autoprune = yes
### don't take snapshots as they are coming from syncoid
autosnap = no
### define what to keep
frequently = 0
hourly = 36
daily = 7
weekly = 4
monthly = 12
yearly = 2
This is an example of a currently snapshotted dataset, with no pruning done:
zfs list -t snap BackUpAndSync/Docker/Airsonic
NAME USED AVAIL REFER MOUNTPOINT
BackUpAndSync/Docker/Airsonic@autosnap_2021-11-22_10:43:31_yearly 0B - 4,88M -
BackUpAndSync/Docker/Airsonic@autosnap_2021-11-22_10:43:31_monthly 0B - 4,88M -
BackUpAndSync/Docker/Airsonic@autosnap_2021-11-22_10:43:31_weekly 0B - 4,88M -
BackUpAndSync/Docker/Airsonic@autosnap_2021-11-22_10:43:31_daily 0B - 4,88M -
BackUpAndSync/Docker/Airsonic@autosnap_2021-11-22_10:43:31_hourly 0B - 4,88M -
BackUpAndSync/Docker/Airsonic@autosnap_2021-12-01_23:01:13_monthly 1,37M - 4,90M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-01_23:01:19_yearly 0B - 4,90M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-01_23:01:19_monthly 0B - 4,90M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-04_23:00:44_hourly 1,32M - 4,89M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-05_23:01:42_hourly 1,31M - 4,89M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-06_23:01:35_hourly 1,32M - 4,91M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-07_23:01:24_daily 0B - 5,19M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-07_23:01:24_hourly 0B - 5,19M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-08_23:02:00_daily 0B - 4,92M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-08_23:02:00_hourly 0B - 4,92M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-09_23:00:56_daily 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-09_23:00:56_hourly 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-10_23:00:56_daily 0B - 5,05M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-10_23:00:56_hourly 0B - 5,05M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-11_23:01:16_weekly 0B - 5,06M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-11_23:01:16_daily 0B - 5,06M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-11_23:01:16_hourly 0B - 5,06M -
BackUpAndSync/Docker/[email protected] 1,24M - 5,09M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-12_23:01:08_daily 0B - 5,06M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-12_23:01:08_hourly 0B - 5,06M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-13_23:01:05_daily 0B - 5,03M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-13_23:01:05_hourly 0B - 5,03M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-14_23:00:33_daily 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-14_23:00:33_hourly 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-15_23:01:23_daily 0B - 5,05M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-15_23:01:23_hourly 0B - 5,05M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-16_23:01:09_daily 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-16_23:01:09_hourly 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-17_23:01:13_daily 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-17_23:01:13_hourly 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-18_23:01:19_weekly 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-18_23:01:19_daily 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-18_23:01:19_hourly 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-19_23:01:20_daily 0B - 5,06M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-19_23:01:20_hourly 0B - 5,06M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-20_23:01:21_daily 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-20_23:01:21_hourly 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-21_23:00:38_daily 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-21_23:00:38_hourly 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-22_23:01:27_daily 0B - 5,05M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-22_23:01:27_hourly 0B - 5,05M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-23_23:01:20_daily 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-23_23:01:20_hourly 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-24_23:00:45_daily 0B - 5,02M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-24_23:00:45_hourly 0B - 5,02M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-25_23:00:41_weekly 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-25_23:00:41_daily 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-25_23:00:41_hourly 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-26_23:00:38_daily 60K - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-26_23:00:38_hourly 60K - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-27_23:00:58_daily 60K - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-27_23:00:58_hourly 60K - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-28_23:00:44_daily 0B - 5,07M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-28_23:00:44_hourly 0B - 5,07M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-29_19:15:59_daily 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-29_19:15:59_hourly 0B - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-29_20:02:42_hourly 1,18M - 5,04M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-30_06:45:10_daily 0B - 5,10M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-30_06:45:10_hourly 0B - 5,10M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-30_23:01:16_hourly 292K - 5,11M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-31_23:01:02_daily 0B - 5,08M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-01-31_23:01:02_hourly 0B - 5,08M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-02-01_23:01:18_monthly 0B - 5,10M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-02-01_23:01:18_weekly 0B - 5,10M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-02-01_23:01:18_daily 0B - 5,10M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-02-01_23:01:18_hourly 0B - 5,10M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-02-02_23:00:49_daily 0B - 5,11M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-02-02_23:00:49_hourly 0B - 5,11M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-02-03_23:00:49_daily 0B - 5,11M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-02-03_23:00:49_hourly 0B - 5,11M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-02-04_23:00:55_daily 0B - 5,11M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-02-04_23:00:55_hourly 0B - 5,11M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-02-05_23:00:33_daily 0B - 5,11M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-02-05_23:00:33_hourly 0B - 5,11M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-02-06_23:00:39_daily 0B - 5,11M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-02-06_23:00:39_hourly 0B - 5,11M -
BackUpAndSync/Docker/Airsonic@AirsonicUpgraded-2022-02-27_09_31 1,41M - 5,27M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-03-01_00:00:07_monthly 1,35M - 5,23M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-04-01_00:00:06_monthly 1,39M - 5,31M -
BackUpAndSync/Docker/Airsonic@SnapBeforeWatchTower-Date2022-04-26_10_21_45 68K - 5,29M -
BackUpAndSync/Docker/Airsonic@SnapBeforeWatchTower-Date2022-04-26_10_22_45 68K - 5,29M -
BackUpAndSync/Docker/Airsonic@SnapBeforeWatchTower-Date2022-04-26_10_41_38 60K - 5,29M -
BackUpAndSync/Docker/Airsonic@SnapBeforeWatchTower-Date2022-04-26_10_43_52 60K - 5,29M -
BackUpAndSync/Docker/Airsonic@SnapBeforeWatchTower-Date2022-04-26_10_45_46 60K - 5,29M -
BackUpAndSync/Docker/Airsonic@SnapBeforeWatchTower-Date2022-04-27_16_45_01 1,33M - 5,28M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-05-01_00:00:14_monthly 1,33M - 5,43M -
BackUpAndSync/Docker/Airsonic@SnapBeforeWatchTower-Date2022-05-04_16_45_01 1,34M - 5,44M -
BackUpAndSync/Docker/Airsonic@SnapBeforeWatchTower-Date2022-05-11_16_45_02 1,33M - 5,40M -
BackUpAndSync/Docker/Airsonic@SnapBeforeWatchTower-Date2022-05-18_16_45_02 1,32M - 5,43M -
BackUpAndSync/Docker/Airsonic@SnapBeforeWatchTower-Date2022-05-25_16_45_01 1,31M - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-05-30_23:30:06_weekly 1,32M - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-01_00:00:16_monthly 1,32M - 5,43M -
BackUpAndSync/Docker/Airsonic@SnapBeforeWatchTower-Date2022-06-01_16_45_01 1,32M - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-06_23:30:07_weekly 1,32M - 5,43M -
BackUpAndSync/Docker/Airsonic@SnapBeforeWatchTower-Date2022-06-08_16_45_02 1,32M - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-13_23:30:06_weekly 1,32M - 5,43M -
BackUpAndSync/Docker/Airsonic@SnapBeforeWatchTower-Date2022-06-15_16_45_01 1,31M - 5,42M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-17_00:00:04_daily 1,33M - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-18_00:00:11_daily 1,17M - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-19_00:00:03_daily 1,17M - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-20_00:00:11_daily 1,33M - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-20_23:30:07_weekly 60K - 5,48M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-21_00:00:11_daily 60K - 5,48M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_00:00:09_daily 1,33M - 5,42M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_01:00:12_hourly 68K - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_02:00:11_hourly 60K - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_03:00:08_hourly 60K - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_04:00:07_hourly 60K - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_05:00:09_hourly 60K - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_06:00:11_hourly 60K - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_07:00:07_hourly 60K - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_08:00:10_hourly 60K - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_09:00:09_hourly 60K - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_10:00:09_hourly 60K - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_11:00:11_hourly 60K - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_12:00:07_hourly 60K - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_13:00:10_hourly 60K - 5,41M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_14:00:08_hourly 60K - 5,43M -
BackUpAndSync/Docker/Airsonic@SnapBeforeWatchTower-Date2022-06-22_16_45_02 60K - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_15:00:11_hourly 88K - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_16:00:11_hourly 60K - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_17:00:06_hourly 60K - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_18:00:08_hourly 92K - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_19:00:06_hourly 92K - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_20:00:09_hourly 60K - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_21:00:10_hourly 60K - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_22:15:06_hourly 92K - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-22_23:00:08_hourly 60K - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-23_00:00:09_daily 0B - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-23_00:00:09_hourly 0B - 5,43M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-23_01:00:10_hourly 60K - 5,42M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-23_02:00:09_hourly 60K - 5,42M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-23_03:00:05_hourly 60K - 5,42M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-23_04:00:01_hourly 72K - 5,42M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-23_05:00:09_hourly 72K - 5,42M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-23_06:00:07_hourly 72K - 5,42M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-23_07:00:07_hourly 60K - 5,42M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-23_08:00:10_hourly 60K - 5,42M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-23_09:00:07_hourly 72K - 5,42M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-23_10:00:07_hourly 72K - 5,42M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-23_11:00:04_hourly 76K - 5,42M -
BackUpAndSync/Docker/Airsonic@autosnap_2022-06-23_12:00:08_hourly 60K - 5,42M -
BackUpAndSync/Docker/Airsonic@syncoid_zboxri531-lubuntults_2022-06-23:14:01:23-GMT02:00 0B - 5,42M -
Sanoid and Syncoid will not do any pruning on another host, as I understood it from other issues. For local backups you can use another Sanoid template, and on the other host you should also install Sanoid and do the pruning with such a template, as far as I can remember :-).
But as I have a TrueNAS (currently SCALE) as my target system, I wasn't really sure whether I would get Sanoid running without problems, so I just wrote a little script. Maybe it will fit your needs with a few changes.
https://gist.github.com/crpb/06d157dba1ef191a50b1b641463d19d2
I'm doing everything locally in my case (it's just an external drive hooked up by usb-c). Also this theory wouldn't hold true in my case anyway because sanoid is keeping the desired amount of dailies and weeklies, but not monthlies/yearlies.
I am also talking about the receiving end (locally).
At the location where my snapshots end up, a Sanoid template with pruning enabled doesn't prune the received snapshots. They are just filling up the drive. Even with CoW, it will eventually be too many snapshots if they don't get cleaned up.
Sorry for the confusion.
Did you try the following?
sanoid --debug --force-update
sanoid --prune-snapshots --force-prune --verbose --debug
I cannot reproduce it here, at least with frequents/hourlies :-) ... VERSION=2.1.0
@PaperOrb According to the comment in sanoid.defaults.conf, pruning is performed as follows:
if autoprune is set, we will prune any snapshots of that type which are older than (setting * periodicity) - so if daily = 90, we'll prune any dailies older than 90 days.
Therefore, with yearly = 1 and monthly = 2 in sanoid.conf, sanoid does not limit itself to keeping 1 yearly snapshot and 2 monthly snapshots (which I think is how you're assuming it should work), but rather prunes all yearly snapshots that are older than 1 year and all monthly snapshots that are older than 2 months*. It thus seems to be working as intended, since all your snapshots are less than 2 months old**.
I think the problem in your case is rather that these excess monthly and yearly snapshots are generated in the first place. Not sure why this happens on your end, as sanoid should notice that e.g. a yearly snapshot that is more recent than as of 2022-01-01 00:00 already exists, and hence it shouldn't create a new one.
* While making sure that at least 1 yearly and 2 monthly snapshots exist, respectively; the comments in the source code make this clear:
if we say "daily=30" we really mean "don't keep any dailies more than 30 days old", etc
but if we say "daily=30" we ALSO mean "don't get rid of ANY dailies unless we have more than 30".
** When checking which monthly snapshots to prune, sanoid defines one month as 31 days, which is why the oldest monthly snapshot from 2022-04-16 was not yet pruned. Relevant section in the source code:
# we need to set the period length in seconds first
if ($type eq 'frequently') { $period = 60 * $config{$section}{'frequent_period'}; }
elsif ($type eq 'hourly') { $period = 60*60; }
elsif ($type eq 'daily') { $period = 60*60*24; }
elsif ($type eq 'weekly') { $period = 60*60*24*7; }
elsif ($type eq 'monthly') { $period = 60*60*24*31; }
elsif ($type eq 'yearly') { $period = 60*60*24*365.25; }
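To make the footnote concrete, here is a small worked example using GNU date (timestamps taken from the listing and debug output above; treating the sanoid run as happening around 2022-06-17 01:30):

# With monthly = 2, the prune cutoff is setting * period = 2 * 31 = 62 days.
date -d '2022-06-17 01:30 62 days ago' '+%Y-%m-%d %H:%M'
# -> 2022-04-16 01:30
# The oldest monthly, autosnap_2022-04-16_20:31:35_monthly, is newer than
# that cutoff, so it is not yet stale and survives the prune.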
@jan-krieg this is what my snapshots look like now (only daily is working correctly now):
Filesystem my_zpool/my_dataset has:
68 total snapshots (newest: 0.0 hours old)
35 yearly
desired: 1
newest: 0.0 hours old, named autosnap_2022-08-09_15:30:00_yearly
5 weekly
desired: 1
newest: 0.0 hours old, named autosnap_2022-08-09_15:30:00_weekly
3 daily
desired: 3
newest: 0.0 hours old, named autosnap_2022-08-09_15:30:00_daily
25 monthly
desired: 2
newest: 0.0 hours old, named autosnap_2022-08-09_15:30:00_monthly
Is it possible that using ext4 on my laptop and ZFS on the external drive that I'm backing up to is what's causing the issue? I also run this backup script manually every day rather than as a cron job.
I also still seem to have problems.
To keep it short, I'm posting my sanoid template and the dataset I am snapshotting, plus the full list of snapshots.
[BackUpAndSync/Billeder]
use_template = syncoidbackup
[template_syncoidbackup]
### remove outdated snapshots
autoprune = yes
### don't take snapshots as they are coming from syncoid
autosnap = no
### define what to keep
frequently = 0
hourly = 36
daily = 7
weekly = 4
monthly = 12
yearly = 2
zfs list -t snap BackUpAndSync/Billeder
NAME USED AVAIL REFER MOUNTPOINT
BackUpAndSync/Billeder@syncoid_hl5kj37ad-Aspire-TC-115_2021-08-19:10:57:39-GMT02:00 3,99M - 322G -
BackUpAndSync/Billeder@autosnap_2021-08-28_07:00:23_yearly 0B - 322G -
BackUpAndSync/Billeder@autosnap_2021-08-28_07:00:23_monthly 0B - 322G -
BackUpAndSync/Billeder@autosnap_2021-09-02_16:50:46_monthly 0B - 322G -
BackUpAndSync/Billeder@autosnap_2021-10-01_22:00:34_monthly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-03_22:00:30_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-04_22:00:32_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-05_22:02:22_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-06_22:00:33_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-07_22:00:36_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-08_22:00:33_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-08_22:00:33_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-09_22:00:10_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-09_22:00:10_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-10_22:00:30_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-10_22:00:30_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-11_22:00:30_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-11_22:00:30_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-12_22:00:32_weekly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-12_22:00:32_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-12_22:00:32_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-13_22:00:28_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-13_22:00:28_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-14_22:00:25_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-14_22:00:25_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-15_22:00:29_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-15_22:00:29_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-16_22:00:30_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-16_22:00:30_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-17_22:00:34_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-17_22:00:34_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-18_22:00:39_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-18_22:00:39_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-19_22:00:34_weekly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-19_22:00:34_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-19_22:00:34_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-20_22:00:37_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-20_22:00:37_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-21_22:00:36_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-21_22:00:36_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-22_22:00:35_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-22_22:00:35_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-23_22:00:33_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-23_22:00:33_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-24_22:00:32_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-24_22:00:32_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-25_22:00:33_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-25_22:00:33_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-26_22:00:31_weekly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-26_22:00:31_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-26_22:00:31_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-27_22:00:31_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-27_22:00:31_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-28_22:00:29_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-28_22:00:29_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-29_22:00:37_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-29_22:00:37_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-30_22:00:33_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-30_22:00:33_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-31_23:00:29_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-10-31_23:00:29_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-11-01_23:00:33_monthly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-11-01_23:00:33_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-11-01_23:00:33_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-11-02_23:00:30_weekly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-11-02_23:00:30_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-11-02_23:00:30_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-11-03_23:00:28_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-11-03_23:00:28_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-11-04_23:00:31_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-11-04_23:00:31_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-11-05_23:00:32_daily 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-11-05_23:00:32_hourly 0B - 327G -
BackUpAndSync/Billeder@autosnap_2021-11-06_23:00:36_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2021-11-06_23:00:36_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2021-11-07_23:00:34_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2021-11-07_23:00:34_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2021-12-01_23:00:52_monthly 144K - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-01_23:01:24_yearly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-01_23:01:24_monthly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-04_23:00:30_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-05_23:01:45_hourly 144K - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-06_23:01:35_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-07_23:01:02_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-07_23:01:02_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-08_23:01:23_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-08_23:01:23_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-09_23:00:37_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-09_23:00:37_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-10_23:01:00_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-10_23:01:00_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-11_23:00:49_weekly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-11_23:00:49_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-11_23:00:49_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-12_23:00:57_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-12_23:00:57_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-13_23:01:10_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-13_23:01:10_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-14_23:00:35_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-14_23:00:35_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-15_23:01:18_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-15_23:01:18_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-16_23:00:53_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-16_23:00:53_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-17_23:01:15_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-17_23:01:15_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-18_23:01:00_weekly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-18_23:01:00_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-18_23:01:00_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-19_23:01:14_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-19_23:01:14_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-20_23:01:17_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-20_23:01:17_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-21_23:00:37_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-21_23:00:37_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-22_23:01:26_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-22_23:01:26_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-23_23:01:00_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-23_23:01:00_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-24_23:00:33_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-24_23:00:33_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-25_23:00:49_weekly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-25_23:00:49_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-25_23:00:49_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-26_23:00:43_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-26_23:00:43_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-27_23:00:25_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-27_23:00:25_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-28_23:00:43_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-28_23:00:43_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-29_19:16:03_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-29_19:16:03_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-29_20:02:47_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-30_06:45:10_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-30_06:45:10_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-30_23:01:20_hourly 144K - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-31_23:01:27_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-01-31_23:01:27_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-02-01_23:00:49_monthly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-02-01_23:00:49_weekly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-02-01_23:00:49_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-02-01_23:00:49_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-02-02_23:00:41_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-02-02_23:00:41_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-02-03_23:00:29_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-02-03_23:00:29_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-02-04_23:00:50_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-02-04_23:00:50_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-02-05_23:00:59_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-02-05_23:00:59_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-02-06_23:01:06_daily 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-02-06_23:01:06_hourly 0B - 328G -
BackUpAndSync/Billeder@autosnap_2022-03-01_23:00:29_monthly 144K - 328G -
BackUpAndSync/Billeder@autosnap_2022-04-01_22:00:24_monthly 144K - 328G -
BackUpAndSync/Billeder@autosnap_2022-05-01_22:00:25_monthly 920K - 330G -
BackUpAndSync/Billeder@autosnap_2022-05-21_22:00:23_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-05-22_22:00:30_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-05-23_22:00:03_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-05-24_22:00:31_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-05-25_22:00:24_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-05-26_22:00:23_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-05-27_22:00:27_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-05-28_22:00:29_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-05-29_22:00:24_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-05-30_22:00:22_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-05-31_22:00:26_weekly 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-05-31_22:00:26_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-01_22:00:24_monthly 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-01_22:00:24_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-02_22:00:22_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-03_22:00:29_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-04_22:00:23_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-05_22:00:23_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-06_22:00:27_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-08_22:00:24_weekly 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-08_22:00:24_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-09_22:00:30_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-10_22:00:25_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-11_22:00:25_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-12_22:00:22_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-13_22:00:22_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-14_22:00:25_weekly 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-14_22:00:25_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-15_22:00:23_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-16_22:00:28_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-18_22:00:22_daily 144K - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-19_22:00:26_daily 144K - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-20_22:00:22_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-21_22:00:24_weekly 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-21_22:00:24_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-22_22:00:25_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-23_22:00:27_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-24_22:00:23_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-25_22:00:22_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-27_22:00:21_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-28_22:00:23_weekly 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-28_22:00:23_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-29_22:00:22_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-06-30_22:00:22_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-07-01_22:00:24_monthly 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-07-01_22:00:24_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-07-02_14:13:52_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-07-03_00:00:23_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-07-04_00:00:02_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-07-04_23:30:02_weekly 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-07-05_00:00:03_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-07-06_00:00:07_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-07-07_00:00:27_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-07-08_00:00:26_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-07-09_00:00:26_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-07-10_00:00:12_daily 0B - 331G -
BackUpAndSync/Billeder@autosnap_2022-07-11_00:00:21_daily 0B - 331G -
BackUpAndSync/Billeder@syncoid_zboxri531-lubuntults_2022-07-11:09:10:10-GMT02:00 0B - 331G -
Going by my template I should only have 36 hourlies; if I am counting correctly, I have a total of 71. I should only have 7 dailies; again, if I am counting correctly, I have 102. Weekly should be 4; I have 13. Monthly should be 12; I have 11, but that is only because I have had the dataset for under a year, so the monthlies haven't built up yet. Yearly should be 2; I have 1, for the same reason.
As can be seen, it is not actually pruning hourly, daily, or weekly snapshots (and probably won't prune monthly or yearly ones either once I get to that point).
I have this issue on 2 separate backup servers. Sanoid, installed and configured on the 2 separate computers, isn't actually pruning anything.
One of them is Ubuntu 22.04 and the other is Ubuntu 20.04.
On both computers I have sanoid version:
/usr/sbin/sanoid version 2.1.0
(Getopt::Long::GetOptions version 2.50; Perl version 5.30.0)
Hope this is somewhat helpful.
@Darkyere
The following command will show you the same info as above:
sudo sanoid --debug |sed -n '/Filesystem.*has:/,/^$/p'
So, no manual counting should be necessary.
Terribly sorry about that. I actually also felt a bit bad about making such a long post.
Here is the desired version of the output:
Filesystem BackUpAndSync/Billeder has:
211 total snapshots (newest: 750.1 hours old)
12 monthly
desired: 12
newest: 968.1 hours old, named autosnap_2022-07-01_22:00:24_monthly
72 hourly
desired: 36
newest: 4447.1 hours old, named autosnap_2022-02-06_23:01:06_hourly
2 yearly
desired: 2
newest: 5311.1 hours old, named autosnap_2022-01-01_23:01:24_yearly
14 weekly
desired: 4
newest: 894.6 hours old, named autosnap_2022-07-04_23:30:02_weekly
111 daily
desired: 7
newest: 750.1 hours old, named autosnap_2022-07-11_00:00:21_daily
@Darkyere did you try the commands here? I'm just guessing that because this isn't a permanently imported volume, you would need to forcibly update the index so a prune does its work as intended...
This should very well be a permanently imported volume.
My only difference from the original poster is that I have installed Ubuntu on the PCs and then created the pools with internal SATA drives that never get pulled from the PC.
The commands were similar to:
Mini PC
zpool create Storage mirror DiskByID DiskByID
Desktop PC
zpool create Storage raidz2 DiskByID DiskByID DiskByID DiskByID
I'm a bit curious whether you still believe I should run these commands:
sanoid --debug --force-update
sanoid --prune-snapshots --force-prune --verbose --debug
I would expect Sanoid to handle snapshots normally on permanently attached drives, even if they were received with syncoid. I'm just trying to figure out whether there is a bug of some sort that needs fixing, or whether I need to run these commands.
These commands won't delete what shouldn't be deleted. So there is no harm in trying.
--force-update Clears out sanoid's zfs snapshot cache
--prune-snapshots Purges expired snapshots as specified in sanoid.conf
--force-prune Purges expired snapshots even if a send/recv is in progress
You could leave out --force-prune.
It was just a guess that maybe the cache file had some problems. After looking a bit through the code, it seems the cache will be updated after 15 minutes anyway (see grep -A1 snapshots.txt $(which sanoid)).
Nonetheless, it won't destroy your zpool or datasets :sweat_smile:
@PaperOrb Looking at the commands in your original post, I don't see a reason why using ext4 on the laptop and updating the ZFS dataset via rsync should cause any issue (but maybe I'm missing something obvious). However, that you're now running the script every day seems "consistent" (given the problem as described in your original post) with the fact that only the number of daily snapshots is now correct.
Did you try out what happens if you run your backup script twice (or more) in a row, i.e., manually starting it again after it has finished? Based on the above, I would expect that sanoid creates a new daily, weekly, etc. snapshot, so that you then have e.g. a total of 4 instead of 3 dailies, and that each further run adds snapshots (instead of sanoid recognizing that there are already sufficiently recent snapshots). If true, maybe that allows you to debug this further and drill down into why the problem is occurring on your side.
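A minimal sketch of that test, simplified to exercise just the sanoid step (the dataset name is taken from the original post; the flags are standard zfs/sanoid options):

zfs list -H -o name -t snapshot -r my_zpool/my_dataset | wc -l   # baseline snapshot count
sanoid --cron --verbose                                          # first run
zfs list -H -o name -t snapshot -r my_zpool/my_dataset | wc -l
sanoid --cron --verbose                                          # immediate second run
zfs list -H -o name -t snapshot -r my_zpool/my_dataset | wc -l

If the count grows again on the immediate second run, snapshot creation is misfiring; if it stays flat but old snapshots never disappear, pruning is the problem.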
@Darkyere I'd second @crpb's suggestion to run the two commands they mentioned and copy-paste the results here; might give some insight into why old snapshots are not pruned. And no harm done with your longer previous post; it was actually useful to also see the output of zfs list, as the timestamps suggest it's a different problem on your side.
I was sitting down today, right now as a matter of fact, to run the two commands. But now, for some reason, the issue has fixed itself.
The only difference is that I just upgraded from Ubuntu 20.04.4 to 22.04.1, and during the process it somehow fixed the issue; I don't know if it updated the cache or something. I honestly have no clue.
All I know for now is that the issue has been fixed. I almost feel sad that it fixed itself.
I would have liked to know the actual reason for this.
Just posting the output of sudo sanoid --debug | sed -n '/Filesystem.*has:/,/^$/p':
Filesystem BackUpAndSync/Billeder has:
61 total snapshots (newest: 991.0 hours old)
12 monthly
desired: 12
newest: 1209.0 hours old, named autosnap_2022-07-01_22:00:24_monthly
2 yearly
desired: 2
newest: 5552.0 hours old, named autosnap_2022-01-01_23:01:24_yearly
7 daily
desired: 7
newest: 991.0 hours old, named autosnap_2022-07-11_00:00:21_daily
36 hourly
desired: 36
newest: 4688.0 hours old, named autosnap_2022-02-06_23:01:06_hourly
4 weekly
desired: 4
newest: 1135.5 hours old, named autosnap_2022-07-04_23:30:02_weekly
So sorry, we can't get to the actual issue at the moment. I would for sure have liked to know the cause of this. I can only hope it will continue to work on other PCs as well.
I have a backup server at my folks' place where I would like to see if this has actually worked as well, but it's 12 kilometers away, and I'm not going there until Monday.
If the issue persists at my folks' home, I will write back here. But until then, well, if there isn't a problem any longer, there is no way to fix it ;)
Best regards, Darkyere
@Darkyere I'm on 22.04 as well but still facing this issue.
Yes, that seems to prove there is still an error for some reason. It could be interesting for me to make a new syncoid transfer and then see if it cleans up the old snapshots when it receives new ones on my end; I will see to doing that. We can't just assume the error fixed itself. I would also have liked to try the two commands, to see what they would have done.
Btw, I didn't mean to close the issue for you. I just thought the issue might be over now for me; I at least hope it is.
Just my 2 cents to add, for the "too many snapshots" discussion:
Make sure the following lines are in your /etc/sanoid.conf file under the correct section:
- autoprune = yes to make sure pruning is on.
- prune_defer = 0 to always prune.
# pruning can be skipped based on the used capacity of the pool
# (0: always prune, 1-100: only prune if used capacity is greater than this value)
prune_defer = 0
I believe prune_defer fixed it. If I find out it's not actually fixed, I'll reopen this. Thanks!
Hm, that seems surprising, given that prune_defer = 0 is set by default (in sanoid.defaults.conf), so there should be no need to set it manually in sanoid.conf unless the default has been modified; and according to the command output in your original post, prune_defer = 0 was indeed part of the config you were using. As such, also setting this in sanoid.conf should not make a difference. Maybe there was some other change that actually solved the issue?
Just an update to say that this was correct. I'm still noticing sanoid keeping an excess amount of snapshots without pruning them even after switching to prune_defer = 0.
There's a whole wash of different people with different issues here, but it all amounts to "system is too overloaded" with various consequences.
If your system is too overloaded to work through zfs list -t snap in between Sanoid runs, you can end up with multiple Sanoid instances all queued up and trying to update the cache simultaneously (which also makes your load problem even worse).
You can also end up taking too many snapshots, because Sanoid is extremely paranoid, and defaults to "I'd rather take it twice than not at all." If one instance took a snapshot but it hasn't made it into the cache because another instance is still trying to update the cache, then the next Sanoid run can't tell that the first one already took a snapshot to satisfy the need for another of that type (frequent, hourly, daily, etc as appropriate) and takes another one. And yes, the extra isn't going to disappear until it's old enough to be stale.
If the overloading is REALLY bad, you can also end up accumulating too many old snapshots which never get pruned even after being stale, because see above: if you can't update the cache, you don't know what's on the system, and you can't start destroying stale snapshots.
In most cases, the fix is as simple as running Sanoid less frequently: for example, */15 * * * * to only run every 15 minutes, or 0 * * * * to only run hourly.
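For instance, a possible /etc/crontab entry (the sanoid path matches the version output earlier in this thread; the exact schedule is just an illustration):

# run sanoid every 15 minutes instead of every minute
*/15 * * * * root /usr/sbin/sanoid --cron

A systemd timer with a similar interval works just as well; the point is to leave more headroom between runs than a full zfs list -t snap takes.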
However, if you've already accumulated so many extra snapshots that your system is struggling VERY badly to update the cache, you may need to do some manual cleanup:
- temporarily disable your Sanoid cron job
- kill off any remaining sanoid instances
- wait for any currently running zfs list -t snap to complete (you can't really kill these)
- run sanoid --force-update
- run sanoid --cron --verbose
At this point, you'll have pruned any and all stale snapshots (though not duplicate snapshots which aren't old enough to be stale yet). Now, do a time zfs list -t snap > /dev/null and make sure that your systemd task or cron job invokes Sanoid considerably less frequently than the time it took to pull that list of snapshots.
There is a certain point beyond which zfs list -t snap becomes incredibly onerous to perform, and that point is generally measured in "total number of snapshots per dataset". Somewhere in the general vicinity of 100 snapshots or so, typically, although that number varies with type of drive (rust has a lot more trouble than flash), amount of system RAM, etc.
If your zfs list -t snap > /dev/null AFTER you finish pruning stale snapshots and removing duplicates (either by waiting for them to become stale, or by manually removing them yourself) takes more than a few seconds to complete, I'd strongly advise paring down the number of snapshots you're attempting to keep.
Jim, might we want to update the documentation to not recommend running every minute? I think */15 would be a more reasonable default than the current *.
@jimsalterjrs I only ever run sanoid manually via my script, so there's no chance it's being run too frequently in my case (most days I forget to even run it); sanoid only runs when I invoke the command manually from the command line.
edit: also, there are very few snapshots in my case.