basic-to-passport-auth-http-proxy
Destination corrupted - Restore Only
First of all: thank you for this proxy. I started using it 3 months ago on my DS220+ with DSM 7.0.1. Subscribed to Microsoft 365 Family because of it. Worked fine for almost 3 months, until 3 days ago (January 9th).
Hyper Backup says the destination is corrupted. It only lists a restore option, no option to take a backup. Hyper Backup gives a 'log', but that only says:
Backup task was unable to run due to errors found at the backup destination. The following files were found broken in the latest integrity check and cannot be restored. There may be other broken files which were not detected this time. If you have further questions, please contact Synology support. Broken file(s): [...]
The log from the Docker container shows nothing special either. The two most recent lines:
2022-01-12T18:31:27.350Z proxy:info Proxy server listening: { address: '::', family: 'IPv6', port: 3000 }
2021-12-08T16:27:49.517Z proxy:info Proxy server listening: { address: '::', family: 'IPv6', port: 3000 }
Where else can I find relevant information/logs?
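A minimal way to pull more of the container log than the two lines above, assuming the container is named passport-proxy (substitute the name shown by docker ps):

# Show recent proxy container output with Docker's own timestamps and filter for problems
# ("passport-proxy" is an assumed container name, not necessarily yours)
docker logs --timestamps --tail 200 passport-proxy 2>&1 | grep -iE 'error|warn'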
DS220+ running DSM 7.0.1-42218 and Hyper Backup 3.0.2-2432. Hyper Backup has just been updated, so the error occurred with a previous version.
I backed up approximately 700 GB with it. It took 5 days of 24/7 uploading, so just a 'quick' delete and try again isn't a viable option...
Thanks in advance!
The size of the backup's database file might have hit an upload file size limit of the OneDrive WebDAV API. Unfortunately this is a limitation of this proxy that cannot be fixed. Have a look at this issue: https://github.com/skleeschulte/basic-to-passport-auth-http-proxy/issues/11 An alternative piece of software is also mentioned there.
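A rough way to check whether any single backup object is getting unusually large is to look at the biggest files Hyper Backup keeps in its local cache for the task. This is only a hedged sketch: it assumes the cache under /volume1/@img_bkp_cache roughly mirrors the objects uploaded through the proxy, which may not hold on every setup.

# List the 10 largest files in Hyper Backup's local cache (sizes in KB)
# Assumption: the cache layout mirrors what gets uploaded to the WebDAV destination
du -ak /volume1/@img_bkp_cache 2>/dev/null | sort -nr | head -10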
Got the same issue while running the integrity check on a 20TB+ backup to SharePoint. The backup itself went fine though and didn't generate such an event (less resource-intensive, I guess). Maybe a warning could be added to the README.md regarding the impossibility of running those checks? In my case the event log shows this kind of error (redacted and summarized):
transfer_webdav.cpp:326 need retry #0: recvFile failed: -300, Server error
cloudstorage/protocol/webdav/webdav-protocol.cpp(134): Failed to downlaod file msg = 'transfer closed with 5088 bytes remaining to read'
[27663]guard_action.cpp:936 size doesn't match[/volume1/@img_bkp_cache/webdav_SynoHyperBackup_.TT3XMj/XXX.hbk/Pool/0/77/867.index], db_record[52320], file size[0]
[27663]guard_action.cpp:1020 failed to check file in db[/volume1/@img_bkp_cache/webdav_SynoHyperBackup_.TT3XMj/XXX.hbk/Guard/cloud/9_bucket.db]
A similar issue just happened to me. It seems this can happen for different reasons, but the backup almost always stops working after an integrity check. That is not to say the integrity check causes it, but if anything wrong is found during the check, Hyper Backup will prevent you from using the task for backups without offering any course of action to remedy the situation. I have seen these reasons:
- as mentioned in #11, the OneDrive size limitation for a single object
- a destination file not being written for one reason or another, ending up with 0 size. Not sure that anything can be done here (a hedged way to spot such files locally is sketched after this list).
- duplicate file names in the destination - somehow Synology manages to save the same file twice, but this seems to only happen with Google Drive. From what I've read, people were able to delete the older duplicates from the destination and resume the backup, but proceed at your own risk.
- my problem, where the file sizes did not match but were all non-zero.
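For the zero-size case mentioned above, a minimal sketch for spotting such files locally, assuming the affected chunks are still present under Hyper Backup's cache directory (adjust the volume path to your system):

# Find zero-byte files left in Hyper Backup's WebDAV cache
find /volume1/@img_bkp_cache -type f -size 0 2>/dev/null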
There are at least three logs you can look at:
- /var/log/messages can be filtered for relevant information: cat /var/log/messages | grep 'Synology img_' | more
- /var/log/synolog/synobackup.log - this one is the same thing you can see through the Hyper Backup UI
- and if your issue was indeed detected by the integrity check, the most useful log is /volume1/@img_bkp_cache/<task_name>.<UID>/<backup_name>.hbk/Guard/detect/error.log. This actually has the information relevant to why the integrity check failed. Probably the easiest way to find the log is: find /volume1/@img_bkp_cache -name "error.log"
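Putting those together, a sketch of the whole lookup (paths as listed above; the grep pattern and output will of course differ per system):

# 1. System log, filtered for backup-related entries
grep 'Synology img_' /var/log/messages | tail -50
# 2. Hyper Backup's own log (same content as shown in the UI)
tail -50 /var/log/synolog/synobackup.log
# 3. Integrity check details: locate and print every error.log under the cache
find /volume1/@img_bkp_cache -name "error.log" -exec sh -c 'echo "== $1 =="; cat "$1"' _ {} \;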
In my case the log would show this
File size not match[370400][400608], file[Pool/0/0/234.index] on cloud target
File size not match[422784][465856], file[Pool/0/0/235.index] on cloud target
File size not match[150528][190816], file[Pool/0/0/236.index] on cloud target
File size not match[207296][418400], file[Pool/0/0/238.index] on cloud target
File size not match[408896][150528], file[Pool/0/0/239.index] on cloud target
File size not match[190816][421920], file[Pool/0/0/240.index] on cloud target
File size not match[418400][370400], file[Pool/0/0/241.index] on cloud target
File size not match[465856][86944], file[Pool/0/0/242.index] on cloud target
File size not match[86944][429728], file[Pool/0/0/243.index] on cloud target
File size not match[121728][52429872], file[Pool/0/0/245.index] on cloud target
File size not match[400608][391072], file[Pool/0/0/246.index] on cloud target
File size not match[391072][52429448], file[Pool/0/0/247.index] on cloud target
File size not match[429728][121728], file[Pool/0/0/248.index] on cloud target
File size not match[421920][450432], file[Pool/0/0/249.index] on cloud target
File size not match[450432][52431908], file[Pool/0/0/250.index] on cloud target
File size not match[30983812][52432308], file[Pool/0/0/234.bucket] on cloud target
File size not match[52429448][52434224], file[Pool/0/0/235.bucket] on cloud target
File size not match[10747064][25149156], file[Pool/0/0/236.bucket] on cloud target
File size not match[52429872][10747064], file[Pool/0/0/239.bucket] on cloud target
File size not match[25149156][52428940], file[Pool/0/0/240.bucket] on cloud target
File size not match[52431908][30983812], file[Pool/0/0/241.bucket] on cloud target
File size not match[52434224][15231916], file[Pool/0/0/242.bucket] on cloud target
File size not match[15231916][52428924], file[Pool/0/0/243.bucket] on cloud target
File size not match[8583592][422784], file[Pool/0/0/245.bucket] on cloud target
File size not match[52432308][35103164], file[Pool/0/0/246.bucket] on cloud target
File size not match[35103164][207296], file[Pool/0/0/247.bucket] on cloud target
File size not match[52428924][8583592], file[Pool/0/0/248.bucket] on cloud target
File size not match[52428940][49695912], file[Pool/0/0/249.bucket] on cloud target
In my opinion it is very suspicious that the expected and actual sizes seem to be mixed up. For example, the expected size for 249.bucket matches the actual size of 240.bucket, and so on: they all have perfect matches when it comes to size. My best guess is that somehow the index database got out of sync.
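To check for the same pattern in another error.log, a quick hedged test is to pull both size columns out and count how many "expected" values reappear as "actual" values of other entries; the sed pattern below assumes the exact 'File size not match[a][b]' format shown above:

# Extract expected sizes (first bracket) and actual sizes (second bracket)
grep 'File size not match' error.log | sed 's/.*match\[\([0-9]*\)\]\[\([0-9]*\)\].*/\1/' | sort -u > /tmp/expected
grep 'File size not match' error.log | sed 's/.*match\[\([0-9]*\)\]\[\([0-9]*\)\].*/\2/' | sort -u > /tmp/actual
# A count close to the number of mismatch lines suggests the sizes are shuffled rather than corrupt
comm -12 /tmp/expected /tmp/actual | wc -l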
In this specific situation, I was able to delete my task and re-link it, which solved my problem. I was able to run a successful backup and integrity check afterwards. Just a word of advice: before you delete your task, take screenshots of your settings, because for some reason Hyper Backup is not able to recover them from the backup itself.
Anyway, I know this is not really relevant to the passport proxy, and it might not help everybody, but hopefully it at least provides some information to aid in troubleshooting your problem.