
Change standalone basebackups

Open rdunklau opened this issue 3 years ago • 23 comments

In order to avoid the problems listed in #467, I propose this PR to change the standalone basebackups behaviour.

Instead of embedding all the required wals in the basebackup, this PR starts a walreceiver for the duration of the backup. This should resolve the problem of retrieving the WAL files using the fetch method, documented in the above-mentioned PR, without introducing the need for a new backup format.

rdunklau avatar Sep 13 '21 13:09 rdunklau

Is this PR supposed to be rebased on latest master? There seems to be a commit with the same description as one already on master: "Refactor LSN".

jason-adnuntius avatar Sep 14 '21 06:09 jason-adnuntius

So I'm not sure what I should be cherry-picking onto latest master to test this.

jason-adnuntius avatar Sep 14 '21 06:09 jason-adnuntius

Sorry, I must have made an error while rebasing on master, hence the duplicate commit message. I force-pushed the only relevant commit.

rdunklau avatar Sep 14 '21 06:09 rdunklau

Cool, I will take a look at this tomorrow (Melbourne, Australia time)!

jason-adnuntius avatar Sep 14 '21 07:09 jason-adnuntius

Hi,

I'm testing this and getting errors when trying to stop the backup.

Sep 15 02:38:45 d-vgt-005 pghoard[40235]:     cursor.execute("SELECT pg_start_backup(%s, true)", [backup_end_name])
Sep 15 02:38:45 d-vgt-005 pghoard[40235]: psycopg2.errors.ObjectNotInPrerequisiteState: a backup is already in progress
Sep 15 02:38:45 d-vgt-005 pghoard[40235]: HINT:  Run pg_stop_backup() and try again.

I think you need to pass "non-exclusive" to the get_backup_end_segment_and_time call just before this line:

self.log.info("Will wait for walreceiver to stop")

Perhaps do a version check similar to the one in run_local_tar_basebackup:

"non-exclusive" if self.pg_version_server >= 90600 else None

Or something similar; see the sketch below.
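A rough sketch of what I mean (the surrounding call site is an assumption on my part; check pghoard's basebackup module for the exact signatures):

# Hypothetical sketch only: pick the backup mode from the server version,
# mirroring the check in run_local_tar_basebackup, and pass it along.
# The db_conn argument and exact parameters are assumptions, not pghoard's API.
backup_mode = "non-exclusive" if self.pg_version_server >= 90600 else None
end_segment, end_time = self.get_backup_end_segment_and_time(db_conn, backup_mode)
self.log.info("Will wait for walreceiver to stop")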

Of course, this is only an issue if the database is a primary; we don't encounter it if the database is in recovery. But I gather pghoard should support both configurations.


After making that change myself locally, my initial, very simple tests confirm that database restores work with this approach. I'm now going to deploy this to a production server for a while and see how that goes.

jason-adnuntius avatar Sep 15 '21 00:09 jason-adnuntius

On a production server I received this error when starting a new base backup. I'm not sure why; perhaps it's just a setup issue?

On this particular server, we are streaming the backup from a standby, not a primary database.

Sep 15 03:30:07 my-host pghoard[3821705]: pghoard MainThread INFO: Creating a new basebackup for 'cluster10' due to request
Sep 15 03:30:07 my-host pghoard[3821705]: WALReceiver Thread-32 INFO: WALReceiver initialized with replication_slot: None, last_flushed_lsn: LSN(215E/2E000000, server_version=100016>
Sep 15 03:30:07 my-host pghoard[3821705]: PGBaseBackup Thread-32 INFO: Starting walreceiver...
Sep 15 03:30:07 my-host pghoard[3821705]: PGBaseBackup Thread-32 INFO: Started: ['/usr/lib/postgresql/10/bin/pg_basebackup', '--format', 'tar', '--label', 'pghoard_base_backup', '-->
Sep 15 03:30:07 my-host pghoard[3821705]: Exception in thread Thread-33:
Sep 15 03:30:07 my-host pghoard[3821705]: Traceback (most recent call last):
Sep 15 03:30:07 my-host pghoard[3821705]:   File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
Sep 15 03:30:07 my-host pghoard[3821705]:     self.run()
Sep 15 03:30:07 my-host pghoard[3821705]:   File "/var/lib/postgresql/.local/lib/python3.8/site-packages/pghoard/walreceiver.py", line 163, in run
Sep 15 03:30:07 my-host pghoard[3821705]:     self.timeline_id = self.start_replication()
Sep 15 03:30:07 my-host pghoard[3821705]:   File "/var/lib/postgresql/.local/lib/python3.8/site-packages/pghoard/walreceiver.py", line 112, in start_replication
Sep 15 03:30:07 my-host pghoard[3821705]:     self.fetch_timeline_history_files(timeline)
Sep 15 03:30:07 my-host pghoard[3821705]:   File "/var/lib/postgresql/.local/lib/python3.8/site-packages/pghoard/walreceiver.py", line 84, in fetch_timeline_history_files
Sep 15 03:30:07 my-host pghoard[3821705]:     self.c.execute("TIMELINE_HISTORY {}".format(max_timeline))
Sep 15 03:30:07 my-host pghoard[3821705]: psycopg2.errors.UndefinedFile: could not open file "pg_wal/00000002.history": No such file or directory
Sep 15 03:30:07 my-host pghoard[3821705]: Compressor Thread-5 INFO: Stored and encrypted 91 byte of <_io.BytesIO object at 0x7f42783efa90> to 373 bytes, took: 0.001s
Sep 15 03:30:07 my-host pghoard[3821705]: TransferAgent Thread-18 INFO: Uploading memory-blob to object store: dst='cluster10/timeline/00000003.history'
Sep 15 03:30:07 my-host pghoard[3821705]: TransferAgent Thread-18 INFO: 'UPLOAD' transfer of key: 'cluster10/timeline/00000003.history', size: 373, origin: 'my-host' took 0.119s

I'm not seeing any archiving of WAL files either, so I'm wondering whether the exception caused the WAL receiver to start, but not properly?


So none of the WAL files were uploaded for the base backup I just did. I will trigger another one and see whether the result is different.

jason-adnuntius avatar Sep 15 '21 01:09 jason-adnuntius

In case it's just a state issue with my server, I am going to go back to using pg_receivewal and make sure that is working, before switching back to standalone base backups.


It doesn't look like a state issue: I got pghoard running with pg_receivewal and working fine.

I then stopped pghoard, reconfigured it for standalone_hot_backup, removed the xlog_incoming files, restarted pghoard, and got the same error as above.


I must admit I don't understand the significance of this issue, but I did find a post suggesting I could just create the missing history file as an empty file.

https://fatdba.com/2020/10/20/could-not-send-replication-command-timeline_history-error-could-not-open-file-pg_wal-00xxxx-history/

So I will try that, but I'm not sure what the consequences will be.


The WAL receiver started up fine and I can see WAL files being saved now. After the base backup finishes, I will do a database restore to see whether that works.


Is the missing timeline file my fault, or should pghoard be handling it a bit more gracefully?

jason-adnuntius avatar Sep 15 '21 01:09 jason-adnuntius

I'm not sure why, but either the start WAL segment is wrong or there is something wrong with how the walreceiver is configured, because there are WAL segments missing between the start WAL segment and the segments that were actually saved to storage.

The first WAL segment saved to storage is: 000000030000215E000000D4

Whereas the start-wal-segment is: 000000030000215E000000C9

It does, however, save up to the end WAL segment correctly.


I'm running a restore to see whether it succeeds even without those missing WALs.


Failed to restore:

pg_ctl: could not start server
Examine the log output.
2021-09-15 03:33:13.298 UTC [17734] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2021-09-15 03:33:13.298 UTC [17734] LOG:  listening on IPv6 address "::", port 5432
2021-09-15 03:33:13.299 UTC [17734] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-09-15 03:33:13.313 UTC [17735] LOG:  database system was interrupted while in recovery at log time 2021-09-15 02:21:58 UTC
2021-09-15 03:33:13.313 UTC [17735] HINT:  If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.
/usr/local/bin/pghoard_postgres_command: ERROR: '00000004.history' not found from archive
2021-09-15 03:33:13.572 UTC [17735] LOG:  starting point-in-time recovery to 2021-09-15 02:27:25+00
2021-09-15 03:33:13.737 UTC [17735] LOG:  restored log file "00000003.history" from archive
/usr/local/bin/pghoard_postgres_command: ERROR: '000000030000215E000000C9' not found from archive
/usr/local/bin/pghoard_postgres_command: ERROR: '000000020000215E000000C9' not found from archive
/usr/local/bin/pghoard_postgres_command: ERROR: '000000010000215E000000C9' not found from archive
2021-09-15 03:33:14.311 UTC [17735] LOG:  invalid checkpoint record
2021-09-15 03:33:14.311 UTC [17735] FATAL:  could not locate required checkpoint record
2021-09-15 03:33:14.311 UTC [17735] HINT:  If you are not restoring from a backup, try removing the file "/var/lib/postgresql/10/main/backup_label".
2021-09-15 03:33:14.312 UTC [17734] LOG:  startup process (PID 17735) exited with exit code 1
2021-09-15 03:33:14.312 UTC [17734] LOG:  aborting startup due to startup process failure
2021-09-15 03:33:14.313 UTC [17734] LOG:  database system is shut down
pg_ctl: could not start server

jason-adnuntius avatar Sep 15 '21 03:09 jason-adnuntius

In case it's of use, here is the log of the base backup. Perhaps we need to wait a little while after the WAL receiver is started before pg_basebackup is initiated?

Sep 15 04:20:17 my-host systemd[1]: pghoard.service: Succeeded.
Sep 15 04:20:17 my-host systemd[1]: Stopped PostgreSQL streaming backup service.
Sep 15 04:20:52 my-host systemd[1]: Starting PostgreSQL streaming backup service...
Sep 15 04:20:53 my-host systemd[1]: Started PostgreSQL streaming backup service.
Sep 15 04:20:53 my-host pghoard[3831622]: pghoard MainThread INFO: pghoard initialized, own_hostname: 'my-host', cwd: '/var/lib/pghoard'
Sep 15 04:21:28 my-host pghoard[3831622]: 116.202.132.23 - - [15/Sep/2021 04:21:03] "GET /metrics HTTP/1.1" 200 -
Sep 15 04:21:28 my-host pghoard[3831622]: 127.0.0.1 - - [15/Sep/2021 04:21:27] "PUT /cluster10/basebackup HTTP/1.1" 201 -
Sep 15 04:21:28 my-host pghoard[3831622]: pghoard MainThread INFO: Creating a new basebackup for 'cluster10' due to request
Sep 15 04:21:28 my-host pghoard[3831622]: WALReceiver Thread-32 INFO: WALReceiver initialized with replication_slot: None, last_flushed_lsn: LSN(215E/D4000000, server_version=100016, timeline_id=3)
Sep 15 04:21:28 my-host pghoard[3831622]: PGBaseBackup Thread-32 INFO: Starting walreceiver...
Sep 15 04:21:28 my-host pghoard[3831622]: PGBaseBackup Thread-32 INFO: Started: ['/usr/lib/postgresql/10/bin/pg_basebackup', '--format', 'tar', '--label', 'pghoard_base_backup', '--verbose', '--pgdata', '-', >
Sep 15 04:21:28 my-host pghoard[3831622]: WALReceiver Thread-33 INFO: Starting replication from '215E/D4000000', timeline: 3 with slot: None
Sep 15 04:21:28 my-host pghoard[3831622]: Compressor Thread-4 INFO: Stored and encrypted 91 byte of <_io.BytesIO object at 0x7fe6c4709900> to 373 bytes, took: 0.018s
Sep 15 04:21:28 my-host pghoard[3831622]: Compressor Thread-11 INFO: Stored and encrypted 0 byte of <_io.BytesIO object at 0x7fe6c4709ef0> to 305 bytes, took: 0.026s
Sep 15 04:21:28 my-host pghoard[3831622]: TransferAgent Thread-18 INFO: Uploading memory-blob to object store: dst='cluster10/timeline/00000002.history'
Sep 15 04:21:28 my-host pghoard[3831622]: TransferAgent Thread-19 INFO: Uploading memory-blob to object store: dst='cluster10/timeline/00000003.history'
Sep 15 04:21:29 my-host pghoard[3831622]: TransferAgent Thread-18 INFO: 'UPLOAD' transfer of key: 'cluster10/timeline/00000002.history', size: 305, origin: 'my-host' took 0.133s
Sep 15 04:21:29 my-host pghoard[3831622]: TransferAgent Thread-19 INFO: 'UPLOAD' transfer of key: 'cluster10/timeline/00000003.history', size: 373, origin: 'my-host' took 0.143s
Sep 15 04:21:31 my-host pghoard[3831622]: Compressor Thread-8 INFO: Compressed and encrypted 16777216 byte of <_io.BytesIO object at 0x7fe6a07a6310> to 7247897 bytes (43%), took: 0.139s
Sep 15 04:21:31 my-host pghoard[3831622]: TransferAgent Thread-17 INFO: Uploading memory-blob to object store: dst='cluster10/xlog/000000030000215E000000D4'
Sep 15 04:21:31 my-host pghoard[3831622]: TransferAgent Thread-17 INFO: 'UPLOAD' transfer of key: 'cluster10/xlog/000000030000215E000000D4', size: 7247897, origin: 'my-host' took 0.267s
Sep 15 04:21:31 my-host pghoard[3831622]: WALReceiver Thread-33 INFO: Sent flush_lsn feedback as: LSN(215E/D5000000, server_version=100016, timeline_id=3)
Sep 15 04:21:44 my-host pghoard[3831622]: 116.202.132.23 - - [15/Sep/2021 04:21:33] "GET /metrics HTTP/1.1" 200 -
Sep 15 04:21:44 my-host pghoard[3831622]: Compressor Thread-4 INFO: Compressed and encrypted 16777216 byte of <_io.BytesIO object at 0x7fe6a0560d10> to 7496283 bytes (45%), took: 0.145s
Sep 15 04:21:44 my-host pghoard[3831622]: TransferAgent Thread-18 INFO: Uploading memory-blob to object store: dst='cluster10/xlog/000000030000215E000000D5'
Sep 15 04:21:45 my-host pghoard[3831622]: TransferAgent Thread-18 INFO: 'UPLOAD' transfer of key: 'cluster10/xlog/000000030000215E000000D5', size: 7496283, origin: 'my-host' took 0.251s
Sep 15 04:21:45 my-host pghoard[3831622]: WALReceiver Thread-33 INFO: Sent flush_lsn feedback as: LSN(215E/D6000000, server_version=100016, timeline_id=3)
Sep 15 04:22:07 my-host pghoard[3831622]: 116.202.132.23 - - [15/Sep/2021 04:22:03] "GET /metrics HTTP/1.1" 200 -
Sep 15 04:22:07 my-host pghoard[3831622]: Compressor Thread-8 INFO: Compressed and encrypted 16777216 byte of <_io.BytesIO object at 0x7fe6a0560cc0> to 6844644 bytes (41%), took: 0.113s
Sep 15 04:22:07 my-host pghoard[3831622]: TransferAgent Thread-18 INFO: Uploading memory-blob to object store: dst='cluster10/xlog/000000030000215E000000D6'
Sep 15 04:22:07 my-host pghoard[3831622]: TransferAgent Thread-18 INFO: 'UPLOAD' transfer of key: 'cluster10/xlog/000000030000215E000000D6', size: 6844644, origin: 'my-host' took 0.243s
Sep 15 04:22:07 my-host pghoard[3831622]: WALReceiver Thread-33 INFO: Sent flush_lsn feedback as: LSN(215E/D7000000, server_version=100016, timeline_id=3)
Sep 15 04:22:21 my-host pghoard[3831622]: Compressor Thread-4 INFO: Compressed and encrypted 16777216 byte of <_io.BytesIO object at 0x7fe6a0799270> to 7015793 bytes (42%), took: 0.108s
Sep 15 04:22:21 my-host pghoard[3831622]: TransferAgent Thread-19 INFO: Uploading memory-blob to object store: dst='cluster10/xlog/000000030000215E000000D7'
Sep 15 04:22:22 my-host pghoard[3831622]: TransferAgent Thread-19 INFO: 'UPLOAD' transfer of key: 'cluster10/xlog/000000030000215E000000D7', size: 7015793, origin: 'my-host' took 0.255s
Sep 15 04:22:22 my-host pghoard[3831622]: WALReceiver Thread-33 INFO: Sent flush_lsn feedback as: LSN(215E/D8000000, server_version=100016, timeline_id=3)
Sep 15 04:22:33 my-host pghoard[3831622]: 116.202.132.23 - - [15/Sep/2021 04:22:33] "GET /metrics HTTP/1.1" 200 -
Sep 15 04:22:33 my-host pghoard[3831622]: Compressor Thread-4 INFO: Compressed and encrypted 16777216 byte of <_io.BytesIO object at 0x7fe6a0560950> to 7325113 bytes (44%), took: 0.116s
Sep 15 04:22:33 my-host pghoard[3831622]: TransferAgent Thread-19 INFO: Uploading memory-blob to object store: dst='cluster10/xlog/000000030000215E000000D8'
Sep 15 04:22:33 my-host pghoard[3831622]: TransferAgent Thread-19 INFO: 'UPLOAD' transfer of key: 'cluster10/xlog/000000030000215E000000D8', size: 7325113, origin: 'my-host' took 0.250s
Sep 15 04:22:34 my-host pghoard[3831622]: WALReceiver Thread-33 INFO: Sent flush_lsn feedback as: LSN(215E/D9000000, server_version=100016, timeline_id=3)

jason-adnuntius avatar Sep 15 '21 03:09 jason-adnuntius

@rdunklau see above for my experience testing this feature; hopefully it's possible to resolve the issues?

Thanks again for the effort to provide an alternative; I'm keen to re-test when required.

jason-adnuntius avatar Sep 15 '21 03:09 jason-adnuntius

Thank you for the detailed report, I'll comb through it and get back to you.

rdunklau avatar Sep 15 '21 06:09 rdunklau

Which PG version are you using, by the way?

rdunklau avatar Sep 15 '21 08:09 rdunklau

We are on PostgreSQL 10.

jason-adnuntius avatar Sep 15 '21 11:09 jason-adnuntius

I haven't been able to reproduce exactly the issue you were having, but putting a heavy WAL load on the server allowed me to find a few bugs to fix:

  • when some transfers were completed out of order, we could encounter an exception
  • ... which prompted me to add a way to properly detect errors in the underlying thread and abort the backup
  • added a way to use a replication slot, and changed the order in which we perform operations. The idea here is that we create a replication slot with RESERVE_WAL synchronously, and only after that do we launch the walreceiver. This should guard against any kind of "lost first WAL" issue, even though I couldn't reproduce that case (see the sketch below).
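For illustration, the slot creation step boils down to something like this (a sketch only, not the actual PR code; the slot name and connection details are made up):

import psycopg2
import psycopg2.extras

# Sketch: create the physical slot with RESERVE_WAL *before* starting the
# walreceiver, so the server retains WAL from this point onwards and the
# first segments of the backup cannot be recycled in the meantime.
conn = psycopg2.connect(
    "host=standby-host user=replication_user",  # illustrative DSN
    connection_factory=psycopg2.extras.PhysicalReplicationConnection,
)
cur = conn.cursor()
cur.execute('CREATE_REPLICATION_SLOT "pghoard_basebackup" PHYSICAL RESERVE_WAL')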

Once again, thank you for testing this!

rdunklau avatar Sep 15 '21 14:09 rdunklau

My exact setup is as follows:

I am running pghoard against a standby instance. The replication user is not a superuser. I have created this user and granted them execute on the following functions:

GRANT EXECUTE ON FUNCTION pg_start_backup(text, boolean, boolean), pg_stop_backup(boolean, boolean), pg_switch_wal(), pg_is_in_backup(), pg_is_in_recovery() TO the_replication_user;

I note you do not delete the new physical replication slot afterwards? Also, you do not make use of the replication slot when calling pg_basebackup; is that intentional?

I use replication slots, but not physical ones, so this is going to complicate matters for me as well. Anyway, I will do my best to set up a test with a physical replication slot.

I'm not sure whether you are aware, but the comment at the start of run_piped_basebackup about not being able to figure out the start WAL segment is completely inaccurate.

I contributed a change (under my personal pellcorp GitHub account) to grab the backup_label, with the actual start WAL segment, from the tarball as it is being streamed.

Refer to this pull request: https://github.com/aiven/pghoard/pull/326

So your code to extract the start WAL segment is being overridden by the WAL segment recorded in the actual backup file, which is of course what the restore process is going to use.

It seems like we should get rid of that whole section of run_piped_basebackup that determines start_wal_segment, as the code I contributed gets the actual start WAL segment for the base backup.

I should have done that with my original contribution :-(
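For reference, the backup_label approach amounts to roughly this (a sketch, not the actual code from #326):

import re
import tarfile

# Sketch only: stream through a tar-format base backup and pull the start WAL
# segment out of backup_label, whose first line looks like
# "START WAL LOCATION: 2171/96000028 (file 000000030000217100000096)".
def start_wal_segment_from_tar(fileobj):
    with tarfile.open(fileobj=fileobj, mode="r|") as tar:  # non-seekable stream
        for member in tar:
            if member.name.endswith("backup_label"):
                label = tar.extractfile(member).read().decode()
                match = re.search(r"START WAL LOCATION: .+ \(file ([0-9A-F]+)\)", label)
                return match.group(1) if match else None
    return None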

Anyway, I am testing now

jason-adnuntius avatar Sep 15 '21 22:09 jason-adnuntius

Initially I've just tried to test this without enabling the replication slot; I had one success earlier in the day, and now a failure just like yesterday.

I will do another test with a replication slot, but I really don't see how that is going to improve matters.

Unfortunately, even if this does solve the issue, I'm not convinced it's a viable solution for us, as we would have to make sure the replication slot is removed after the base backup; otherwise we will run into over-retention of WAL, which is not a problem at the moment.

jason-adnuntius avatar Sep 16 '21 00:09 jason-adnuntius

Hi @rdunklau ,

Even with replication slots enabled, I am still missing some WAL segments.

The start WAL segment is: 000000030000217100000096

But the first segment that gets saved to storage is 0000000300002171000000A0, so I think this basically means I am missing the WAL segments in between!

Sep 16 02:29:09 my-host pghoard[4002469]: pghoard MainThread INFO: Creating a new basebackup for 'cluster10' due to request
Sep 16 02:29:09 my-host pghoard[4002469]: WALReceiver Thread-32 INFO: WALReceiver initialized with replication_slot: 'test_pghoard', last_flushed_lsn: LSN(2171/A0000000, se>
Sep 16 02:29:09 my-host pghoard[4002469]: PGBaseBackup Thread-32 INFO: Starting walreceiver...
Sep 16 02:29:09 my-host pghoard[4002469]: PGBaseBackup Thread-32 INFO: Started: ['/usr/lib/postgresql/10/bin/pg_basebackup', '--format', 'tar', '--label', 'pghoard_base_backup'>
Sep 16 02:29:09 my-host pghoard[4002469]: WALReceiver Thread-33 INFO: Replication slot test_pghoard already exists
Sep 16 02:29:09 my-host pghoard[4002469]: WALReceiver Thread-33 INFO: Starting replication from '2171/A0000000', timeline: 3 with slot: 'test_pghoard'
Sep 16 02:29:09 my-host pghoard[4002469]: Compressor Thread-10 INFO: Stored and encrypted 91 byte of <_io.BytesIO object at 0x7f0cc01c1270> to 373 bytes, took: 0.014s
Sep 16 02:29:09 my-host pghoard[4002469]: Compressor Thread-11 INFO: Stored and encrypted 0 byte of <_io.BytesIO object at 0x7f0cc01c14a0> to 305 bytes, took: 0.014s
Sep 16 02:29:09 my-host pghoard[4002469]: TransferAgent Thread-25 INFO: Uploading memory-blob to object store: dst='cluster10/timeline/00000002.history'
Sep 16 02:29:09 my-host pghoard[4002469]: TransferAgent Thread-22 INFO: Uploading memory-blob to object store: dst='cluster10/timeline/00000003.history'
Sep 16 02:29:09 my-host pghoard[4002469]: TransferAgent Thread-22 INFO: 'UPLOAD' transfer of key: 'cluster10/timeline/00000003.history', size: 373, origin: 'my-host' took 0.1>
Sep 16 02:29:09 my-host pghoard[4002469]: TransferAgent Thread-25 INFO: 'UPLOAD' transfer of key: 'cluster10/timeline/00000002.history', size: 305, origin: 'my-host' took 0.1>
Sep 16 02:29:21 my-host pghoard[4002469]: Compressor Thread-10 INFO: Compressed and encrypted 16777216 byte of <_io.BytesIO object at 0x7f0ca8127040> to 6319739 bytes (38%), to>
Sep 16 02:29:21 my-host pghoard[4002469]: TransferAgent Thread-17 INFO: Uploading memory-blob to object store: dst='cluster10/xlog/0000000300002171000000A0'

jason-adnuntius avatar Sep 16 '21 01:09 jason-adnuntius

I ran it again, same result: missing WALs (see the two attached screenshots from 2021-09-16).

jason-adnuntius avatar Sep 16 '21 01:09 jason-adnuntius

> I note you do not delete the new physical replication slot afterwards? Also, you do not make use of the replication slot when calling pg_basebackup; is that intentional?

You're right; we should use a temporary replication slot instead. For pg_basebackup we don't need it, as we don't use the -X stream method.

> I'm not sure whether you are aware, but the comment at the start of run_piped_basebackup about not being able to figure out the start WAL segment is completely inaccurate.

I noticed that, but not knowing the history I wondered whether it was a fail-safe for some cases. Agreed to remove it.

Thank you for detailing your setup; I did not test against a WAL-stressed standby server.

rdunklau avatar Sep 16 '21 06:09 rdunklau

Ok, I understand the problem now.

When performing a base backup on a standby, it starts from the latest restartpoint. This LSN is returned by the BASE_BACKUP replication command. What we should do is either:

  • bypass pg_basebackup and issue the BASE_BACKUP command ourselves, which is probably the best option since we would be able to pipe directly from the connection to wherever we need
  • or open a "regular", non-replication connection to get the redo_lsn from pg_control_checkpoint() (a rough sketch follows)
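For option 2, that would amount to roughly this (illustrative only; connection details made up):

import psycopg2

# Sketch: ask the standby for its redo LSN over a regular (non-replication)
# connection, so the walreceiver can start streaming from the base backup's
# true starting point rather than from last_flushed_lsn.
conn = psycopg2.connect("host=standby-host dbname=postgres user=some_user")
with conn.cursor() as cur:
    cur.execute("SELECT redo_lsn FROM pg_control_checkpoint()")
    redo_lsn = cur.fetchone()[0]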

I'm in favor of 1.

rdunklau avatar Sep 16 '21 08:09 rdunklau

I was struggling to understand the problem, and then I realised that we are passing in last_flushed_lsn, which I guess is where the WAL receiver starts streaming from. This last_flushed_lsn can often be later than the start WAL segment of the pg_basebackup, because we are not getting the start LSN value from the right place for a standby.

So if we could get the correct start LSN, it would not matter that the WAL receiver was started at the same time as the pg_basebackup; it would stream the correct WAL files?

So option 2 sounds easier? But I'm guessing option 1 is the more elegant solution. Is this something you are proposing to work on? I'm not confident I could do it myself; I don't understand the internals of the code well enough.

However, I certainly can test the feature!

Thanks again for your time

jason-adnuntius avatar Sep 16 '21 10:09 jason-adnuntius

Hi @rdunklau,

What are your plans for this PR?

jason-adnuntius avatar Sep 20 '21 01:09 jason-adnuntius

I'm happy for this to be closed. I've closed my original PR as well. I plan to migrate to local-tar for most of my backups in the near future (fingers crossed).

jason-adnuntius avatar Jul 14 '22 23:07 jason-adnuntius