Unexpected number of deleted filesets 1 vs 0
Hello -
I was wondering if someone might have some suggestions for troubleshooting my backup. It's a 2.17 TB backup to Amazon Cloud Drive, and for the last couple of weeks it's been throwing the error: Unexpected number of deleted filesets 1 vs 0. I have debug and verbose turned on, but I can't find what seems to be missing. The error started while I was running 2.0.0.96, but now I'm running 2.0.0.99. My machine is Windows 10 64-bit, 4.5 GHz, 16 GB RAM, 256 GB SSD.
When I click Show, it displays this in the General Logs:

```
VerboseOutput: False VerboseErrors: False
Messages: [Destination and database are synchronized, not making any changes]
Warnings: []
Errors: []
```
And here are the first few lines from the remote log:

```
[
{"Name":"duplicati-bc797c670fb0e4352a6fd0847eae54fb6.dblock.zip.aes","LastAccess":"2016-01-27T03:46:03.782Z","LastModification":"2016-01-27T03:46:03.782Z","Size":1073656989,"IsFolder":false},
{"Name":"duplicati-i91f9137521584e5683ec16949837293a.dindex.zip.aes","LastAccess":"2016-01-27T03:46:04.798Z","LastModification":"2016-01-27T03:46:04.798Z","Size":784941,"IsFolder":false},
{"Name":"duplicati-i74633fd2d4be47428ba0e88694bc22a6.dindex.zip.aes","LastAccess":"2016-01-27T03:57:36.877Z","LastModification":"2016-01-27T03:57:36.877Z","Size":682125,"IsFolder":false},
{"Name":"duplicati-ifd90a204567d437d82fc86d769941817.dindex.zip.aes","LastAccess":"2016-01-27T04:03:17.73Z","LastModification":"2016-01-27T04:03:17.73Z","Size":784589,"IsFolder":false},
```
I have tried to repair and also to recreate the database, but the error persists. I also tried to create a bug report, but the file never downloads; it just sits there doing nothing.
Thanks for any advice on troubleshooting this further.
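In case it's useful: since the remote log is just a JSON array of file entries, a saved copy of it can be scanned for suspiciously small dblock volumes. A minimal sketch (the file name remote-listing.json and the 100 KB cutoff are placeholders of mine):

```python
import json

# Parse a saved copy of the remote log listing (the JSON array shown
# above) and flag dblock volumes that look suspiciously small.
with open("remote-listing.json") as fh:  # hypothetical saved copy
    entries = json.load(fh)

for e in entries:
    if ".dblock." in e["Name"] and e["Size"] < 100_000:
        print(e["Name"], e["Size"])
```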
I also get this error, although I'm not sure it's the same problem as TonySturch's. I updated my wife's machine (a 2013 MacBook Air with a Duplicati backup to OneDrive) to Duplicati 2.0.0.99, and I believe she closed the lid during a backup session. When I opened it, the backup gave this error. After trying repair, then delete and repair, the backup still gives the same error as above.
I was able to get an error report. Here is a link to download it: https://onedrive.live.com/redir?resid=A60CC657F0C95379!21564&authkey=!AOm6qIDrgQQJjDY&ithint=file%2czip
Here are the main errors I get when trying to repair or run the backup:
```
2016-02-20 08:53: Failed while executing "Backup" with id: 1
System.Exception: Unexpected number of deleted filesets 1 vs 0
  at Duplicati.Library.Main.Database.LocalDeleteDatabase+<DropFilesetsFromTable>c__Iterator0.MoveNext () in …
  at …`1[TElement]..ctor (IEnumerable`1 source) in …
```

```
2016-02-20 08:36: Failed while executing "Repair" with id: 1
System.Exception: The file duplicati-b6cd27dc7174b4c248f7d59c7333195bb.dblock.zip was downloaded and had size 6830 but the size was expected to be 13249197
  at Duplicati.Library.Main.AsyncDownloader+AsyncDownloaderEnumerator+AsyncDownloadedFile.get_TempFile () in …
```
It also appears that the backup continues to 'work' and I can restore newly backed up files; however, they restore to a 'null' folder within the Duplicati folder.
Really appreciate the work done on Duplicati and any help that can be given.
I don't know if this is relevant, but I noticed that when I try to edit the configuration for how long to keep the backups, the settings are never saved. Instead, they always revert to 'forever', '3' and 'days'. Maybe it's related to this problem?
I might have something more to add to help diagnose the problem. The error happened again this morning on a different backup from the one above, when I had to force-stop an upload that was hanging due to a server timeout. The source size now shows 0 bytes, and the backup continues to give the error "Unexpected number of deleted filesets 1 vs 0" even after verify, repair, and delete-and-repair operations have been run on it.
What is interesting is the difference between the reports generated before and after the problem. It appears the backup doesn't recognize the source folders anymore. I have tried adding the source again, but this doesn't fix the problem.
This was the backup that created the errors:

```
2016-03-07 09:18: Result
DeletedFiles: 4173  DeletedFolders: 1195
ModifiedFiles: 0  ExaminedFiles: 0  OpenedFiles: 0  AddedFiles: 0
SizeOfModifiedFiles: 0  SizeOfAddedFiles: 0  SizeOfExaminedFiles: 0  SizeOfOpenedFiles: 0
NotProcessedFiles: 0  AddedFolders: 0  TooLargeFiles: 0  FilesWithError: 0
ModifiedFolders: 0  ModifiedSymlinks: 0  AddedSymlinks: 0  DeletedSymlinks: 0
PartialBackup: False  Dryrun: False  VerboseOutput: False  VerboseErrors: False
Messages: [Stopping backup operation on request]
Warnings: []
Errors: []
```
2016-03-07 09:18: Message Stopping backup operation on request
This was the last successful backup:

```
2016-03-05 17:17: Result
DeletedFiles: 0  DeletedFolders: 0
ModifiedFiles: 25  ExaminedFiles: 4173  OpenedFiles: 26  AddedFiles: 1
SizeOfModifiedFiles: 99348367  SizeOfAddedFiles: 4776806  SizeOfExaminedFiles: 2400723082  SizeOfOpenedFiles: 104356284
NotProcessedFiles: 0  AddedFolders: 1  TooLargeFiles: 0  FilesWithError: 1
ModifiedFolders: 0  ModifiedSymlinks: 0  AddedSymlinks: 0  DeletedSymlinks: 0
PartialBackup: False  Dryrun: False  VerboseOutput: False  VerboseErrors: False
Messages: [No remote filesets were deleted, Compacting not required]
Warnings: [Failed to process path: /Users/THS/Library/Mobile Documents/com~apple~Numbers/Documents/ClassList.numbers/ => Too large metadata, cannot handle more than 102400 bytes]
Errors: []
```
After deleting and repairing the database, Duplicati gives the error: "The file duplicati-b2f017bd20ed74a6382345b2309ca985f.dblock.zip was downloaded and had size 6832 but the size was expected to be 656431"
Thanks for the added info jfspan.
I believe my issues may have started when the process was interrupted as well.
The error with "forever" and "3 days" is fixed in source and will be in the next build.
Some additional fixes for Amazon Cloud Drive are also included in the next build.
Thanks for the update. Did you mean there were some fixes in 2.0.1.3? Or in a build which has not yet been released?
I installed build 2.0.1.3 this morning and ran a backup successfully, with the usual warning. I then attempted a repair and got the same warning. Then I attempted a recreate, and that choked on 3 files and made matters worse. I can no longer do a backup, as it errors out with:
Found inconsistency in the following files while validating database: <Lists files> Run repair to fix it.
Unfortunately Repair does not work. It errors out with:
Failed to repair, after repair 1 blocklisthashes were missing
Any suggestions on a workaround for this? I'd prefer not having to upload the 2.3 TB again.
I installed build 2.0.1.8_canary_2016-03-20 and the delete and repair appeared to work, although the backup had to upload almost all the data again (checking the logs, it looked like each file was set to a timestamp of 01-01-0001). However, after it completed, I received the error: "Unexpected number of remote volumes marked as deleted. Found 0 filesets, but 101 volumes"
Running repair doesn't do anything (the log says the remote and local db are synchronized).
My source size still shows 0 bytes.
Also, I should mention that I had originally set the backups to be kept for at least 5 years so there shouldn't have been any files marked for deletion in the backup.
I also installed the same build on my wife's laptop to fix the same error, and now it gives: "Abort due to constraint violation UNIQUE constraint failed: Remotevolume.Name, Remotevolume.State"
Not sure how these are related, but both gave the same "Unexpected number of deleted filesets 1 vs 0" error before.
Here is the bug report generated: bugreport 03-23-2016.zip
> I also installed the same build on my wife's laptop to fix the same error, and now it gives: "Abort due to constraint violation UNIQUE constraint failed: Remotevolume.Name, Remotevolume.State"
This looks like #1644.
@jfspan thanks for the bug report, I will look into it. @agrajaghh Yes, I think it is the same problem.
@jfspan there appear to be a lot of errors in that rebuild of the database. For instance: "Failed to process file: duplicati-20150919T173221Z.dlist.zip". Could you try to extract that file with a zip program to see if it is damaged?
I can see that the backup initially failed with a "throttled" error, meaning that the destination prevented you from sending more data.
Are you possibly having two backups that point to the same destination folder?
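If it helps, the archive can also be checked without a GUI zip tool. A minimal sketch using Python's standard zipfile module (the path is wherever the downloaded volume was saved; *.zip.aes files would need decrypting first):

```python
import zipfile

# Name taken from the error message above; adjust the path to wherever
# the volume was downloaded (decrypt it first if it ends in .aes).
path = "duplicati-20150919T173221Z.dlist.zip"

with zipfile.ZipFile(path) as zf:
    bad = zf.testzip()  # re-reads every member and verifies its CRC
    if bad is None:
        print("All entries passed the CRC check:", zf.namelist())
    else:
        print("First corrupted member:", bad)
```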
I use separate folders for each backup, but I do have two or three separate backups (all managed by Duplicati) from the same machine. I break down various data groups (Documents that change more often separately from Photos and Music) and have them back up on different schedules, each to its own folder.
I was able to open the duplicati-20150919T173221Z.dlist.zip file and it appeared to be intact. The filelist.json was readable and listed files with hashes, etc.
Yeah, I was puzzled by the throttled error too; maybe the destination was being accessed by another app at the same time.
One more thing to add (hopefully helpful for debugging): when I tried to add another backup (using the canary 2.0.1.8 build) on the same MacBook Air to test whether different settings would work, Duplicati read an extremely large dataset for what should have been a small amount of data. I selected about 11 GB of data, but Duplicati began reading 212 GB to back up (there's only 128 GB of disk space). I stopped the backup (not force-stopped), checked the settings, and tried to select a smaller batch in the configuration, but when I started again, Duplicati read 17 TB of data to back up.
Also, just wanted to again say, thank you for your time and help since I know you're doing this very part time without compensation. We appreciate Duplicati since it is much better than the paid programs I've used previously (in every aspect).
I think the error with reading files in your setup is the same as in #1652. That issue has now been fixed.
There is a new canary build that should fix most of the issues mentioned here (except the "unexpected number of remote filesets" error, which I am still trying to figure out): https://github.com/duplicati/duplicati/releases
@TonySturch and @jfspan I think I finally found the cause of the "unexpected number of remote filesets" message.
In the bug report from @jfspan I found that the volume "duplicati-20160220T012347Z.dlist.zip" is only 323 bytes. I am not sure why it was created, but it contains no files, and this confuses the delete procedure, which aborts with the mentioned message.
I will make a workaround for this issue, but it would be nice to know why it happens.
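To spot other volumes with the same symptom, the dlist files can be scanned for empty file lists. A minimal sketch, assuming the volumes have been downloaded and decrypted into the current directory, and that each holds a filelist.json containing a JSON array of entries (as in the volume @jfspan opened above):

```python
import glob
import json
import zipfile

# Flag dlist volumes whose filelist.json holds no entries at all,
# like the 323-byte volume described above.
for path in sorted(glob.glob("duplicati-*.dlist.zip")):
    try:
        with zipfile.ZipFile(path) as zf:
            entries = json.loads(zf.read("filelist.json"))
        marker = "  <-- EMPTY" if not entries else ""
        print(f"{path}: {len(entries)} entries{marker}")
    except Exception as exc:
        print(f"{path}: unreadable ({exc})")
```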
@TonySturch: Could you try to recreate the database with the latest canary build? It has some additional fixes for recreating the database that will hopefully fix your issue.
Thank you kindly, sir. I have downloaded the latest binaries and am running a rebuild right now. It may take quite some time to either complete or fail.
The recreate has been running for nearly a day. It was very quick to reach about 85% to 90% complete as per the progress bar, but has since stalled. The process is still using about 40% to 50% CPU, so it appears to be doing something. I'll leave it going for a couple of days and see if it makes any progress. The previous version would likely have failed by this point.
It could be because it is looking for additional data that it could not find. In that case it will be busy downloading and checking files to see whether the missing data can be located.
I suppose that would make sense, though I have not yet seen it use much network data. It's currently using about 65% CPU (quad core, 4.6 GHz) and about 15 MB/s of disk I/O on the system disk, not the volume with the archive being backed up. It doesn't look like the progress bar has moved a single pixel since this morning. Should I just sit tight and let it do its thing?
In that case it is probably some SQL query that is not well designed and causes slowdowns during the repair. The recreate queries are the next thing @FootStark will work on.
I found this error this morning: The OAuth service is currently over quota, try again in a few hours
Thank you so much for all the help. Please don't take my postings as me being impatient; I'm only trying to help by providing any errors I come across. If there's anything I can do to help in any way, please let me know. Thank you again.
Hi @TonySturch, I had the OAuth Quota message with the GDrive backend as well this morning, which finally caused my Backup to fail.
@TonySturch and @deHoeninger: The OAuth service costs some money to run, and it had hit the daily cap I put on it, after a surge of new users came with the announcement of the new experimental release.
I have doubled the daily spending cap, so your backups should be running without this error again.
I have not yet figured out how to get a notification when the daily cap is hit or close to it, so feel free to drop me an email if you see the quota message again.
I have been running the backup again over the last couple days, and it finally errored out today:
Unknown header: 2632630406
If I hit Show, here's what I see in the logs:
```
Failed to process index file: duplicati-ie6495f3b84a94962baa8b65037933fb3.dindex.zip.aes
Newtonsoft.Json.JsonReaderException: After parsing a value an unexpected character was encountered: :. Path 'blocks[458].hash', line 1, position 32198.
  at Newtonsoft.Json.JsonTextReader.ParsePostValue()
  at Newtonsoft.Json.JsonTextReader.ReadInternal()
  at Newtonsoft.Json.JsonTextReader.Read()
  at Duplicati.Library.Main.Volumes.IndexVolumeReader.IndexBlockVolumeEnumerable.IndexBlockVolumeEnumerator.BlockEnumerable.BlockEnumerator.MoveNext()
  at Duplicati.Library.Main.Volumes.IndexVolumeReader.IndexBlockVolumeEnumerable.IndexBlockVolumeEnumerator.BlockEnumerable.BlockEnumerator.ReadVolumeProps()
  at Duplicati.Library.Main.Volumes.IndexVolumeReader.IndexBlockVolumeEnumerable.IndexBlockVolumeEnumerator.BlockEnumerable.BlockEnumerator.Dispose()
  at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun(LocalDatabase dbparent, Boolean updating, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
```

```
Failed to process index file: duplicati-i47417beddfda45b6b9912765962fc725.dindex.zip.aes
Newtonsoft.Json.JsonReaderException: Unexpected character encountered while parsing value: T. Path 'blocks[2903].hash', line 1, position 203225.
  at Newtonsoft.Json.JsonTextReader.ParseValue()
  at Newtonsoft.Json.JsonTextReader.Read()
  at Duplicati.Library.Main.Volumes.IndexVolumeReader.IndexBlockVolumeEnumerable.IndexBlockVolumeEnumerator.BlockEnumerable.BlockEnumerator.MoveNext()
  at Duplicati.Library.Main.Volumes.IndexVolumeReader.IndexBlockVolumeEnumerable.IndexBlockVolumeEnumerator.BlockEnumerable.BlockEnumerator.ReadVolumeProps()
  at Duplicati.Library.Main.Volumes.IndexVolumeReader.IndexBlockVolumeEnumerable.IndexBlockVolumeEnumerator.BlockEnumerable.BlockEnumerator.Dispose()
  at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun(LocalDatabase dbparent, Boolean updating, IFilter filter, NumberedFilterFilelistDelegate filelistfilter, BlockVolumePostProcessor blockprocessor)
```
Is there anything useful in there? I couldn't actually find the files mentioned in my Amazon Cloud Drive. Should I simply delete the whole thing and start from scratch?
Also strange is that Duplicati continues to use 30%-40% CPU after the job has ended. What's even more strange is that after right-clicking and exiting, the icon disappears but the process continues to run, still using 30% or more CPU.
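Regarding the two malformed dindex volumes above, the broken JSON member can be pinpointed outside Duplicati. A minimal sketch, assuming the third-party pyAesCrypt package (which reads the same AES Crypt container format used for *.aes files) and assuming the JSON block descriptions live under a vol/ prefix inside the archive:

```python
import json
import sys
import zipfile

import pyAesCrypt  # third-party; implements the AES Crypt format used for *.aes

# Usage: python check_dindex.py duplicati-i....dindex.zip.aes <passphrase>
enc, passphrase = sys.argv[1], sys.argv[2]
dec = enc[:-4]  # strip the ".aes" suffix
pyAesCrypt.decryptFile(enc, dec, passphrase, 64 * 1024)

with zipfile.ZipFile(dec) as zf:
    # Assumption: members under "vol/" are JSON block descriptions;
    # other members (e.g. binary blocklists) are skipped.
    for name in zf.namelist():
        if not name.startswith("vol/"):
            continue
        try:
            json.loads(zf.read(name))
            print(f"OK   {name}")
        except ValueError as exc:
            print(f"BAD  {name}: {exc}")
```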
I am still getting errors on database repairs and the backup still doesn't appear to be completely working. Whenever I run delete and repair, there is still the error "The file duplicati-b0a3381d92b7f48edaa3a254bb1dbed4c.dblock.zip was downloaded and had size 6832 but the size was expected to be 52331007"
The error given when I run the backup is now: "The database was attempted repaired, but the repair did not complete. This database may be incomplete and the backup process cannot continue. You may delete the local database and attempt to repair it again." (followed by the same size-mismatch error given above).
However, when I try to duplicate the backup settings for a new backup, the source size jumps astronomically, from what should be about 25-35 GB to well over 90 GB. It's as though the filters are not working correctly or are being misapplied in some way; I'm not sure if these problems are connected. I have a similar problem with file sizes on my wife's computer: when I recreate a backup that isn't working, the source jumps from about 40 GB to over 190 GB.
Is it possible to 'roll back' to a previous backup point and restart the backup process from there? I only have about 80-90 GB of data (not quite the 2.3 TB of TonySturch above), so I could start over if needed (though I'd like to avoid it), but it would be great to know the best way to fix this going forward.
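One way to see what the local job database believes about the remote volumes is to query it directly. A minimal sketch, assuming a placeholder path to the job's .sqlite file and that the Remotevolume table carries a Size column alongside the Name and State columns named in the constraint error earlier in this thread:

```python
import sqlite3

# Placeholder path; the job databases live under Duplicati's data folder
# (e.g. ~/.config/Duplicati or %LOCALAPPDATA%\Duplicati).
DB_PATH = "XXXXXXXX.sqlite"

con = sqlite3.connect(DB_PATH)
# List the smallest recorded dblock volumes; the repeated "had size 6832"
# errors suggest cross-checking these against actual sizes on the backend.
rows = con.execute(
    "SELECT Name, Size, State FROM Remotevolume "
    "WHERE Name LIKE '%.dblock%' ORDER BY Size LIMIT 20"
)
for name, size, state in rows:
    print(f"{str(size):>12}  {state:<10}  {name}")
con.close()
```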
Windows 10 64-bit, having a similar problem; I am unable to perform any backup.
I tried running it as administrator with full permissions, but nothing changed.
```
System.Exception: Unexpected number of remote volumes marked as deleted. Found 0 filesets, but 2 volumes
  at Duplicati.Library.Main.Database.LocalDeleteDatabase.<DropFilesetsFromTable>c__Iterator0.MoveNext()
  at Duplicati.Library.Main.Operation.DeleteHandler.DoRun(LocalDeleteDatabase db, IDbTransaction transaction, Boolean hasVerifiedBacked, Boolean forceCompact)
  at Duplicati.Library.Main.Operation.BackupHandler.Run(String[] sources, IFilter filter)
  at Duplicati.Library.Main.Controller.<Backup>c__AnonStorey0.<>m__0(BackupResults result)
  at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, Action`1 method)
  at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
  at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)
```
```
System.IO.IOException: The process cannot access the file because it is being used by another process.
  at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
  at System.IO.File.InternalMove(String sourceFileName, String destFileName, Boolean checkHost)
  at Duplicati.Library.Main.Operation.RepairHandler.Run(IFilter filter)
  at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, Action`1 method)
  at Duplicati.Library.Main.Controller.Repair(IFilter filter)
  at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)
```
```
System.Data.SQLite.SQLiteException (0x80004005): disk I/O error
disk I/O error
  at System.Data.SQLite.SQLite3.Reset(SQLiteStatement stmt)
  at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
  at System.Data.SQLite.SQLiteDataReader.NextResult()
  at System.Data.SQLite.SQLiteDataReader..ctor(SQLiteCommand cmd, CommandBehavior behave)
  at System.Data.SQLite.SQLiteCommand.ExecuteReader(CommandBehavior behavior)
  at System.Data.SQLite.SQLiteCommand.ExecuteNonQuery(CommandBehavior behavior)
  at Duplicati.Library.SQLiteHelper.DatabaseUpgrader.UpgradeDatabase(IDbConnection connection, String sourcefile, String schema, IList`1 versions)
  at Duplicati.Library.SQLiteHelper.DatabaseUpgrader.UpgradeDatabase(IDbConnection connection, String sourcefile, Type eltype)
  at Duplicati.Library.Main.Database.LocalDatabase.CreateConnection(String path)
  at Duplicati.Library.Main.Database.LocalBackupDatabase..ctor(String path, Options options)
  at Duplicati.Library.Main.Operation.BackupHandler.Run(String[] sources, IFilter filter)
  at Duplicati.Library.Main.Controller.<Backup>c__AnonStorey0.<>m__0(BackupResults result)
  at Duplicati.Library.Main.Controller.RunAction[T](T result, String[]& paths, Action`1 method)
  at Duplicati.Library.Main.Controller.Backup(String[] inputsources, IFilter filter)
  at Duplicati.Server.Runner.Run(IRunnerData data, Boolean fromQueue)
```
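For what it's worth, a disk I/O error like the one above usually points at the local SQLite database file itself rather than the backend, and SQLite's built-in integrity check can confirm that. A minimal sketch (the database path is a placeholder):

```python
import sqlite3

# Placeholder path; point this at the job database that threw the
# "disk I/O error" above.
con = sqlite3.connect("Duplicati-job.sqlite")
# Prints the single row "ok" when the file is structurally sound;
# anything else lists the corruption SQLite found.
print(con.execute("PRAGMA integrity_check;").fetchone()[0])
con.close()
```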
I've been having a similar (or the same) issue on my Arch Linux machine using ACD. I've tried repairing the database but I'm still receiving these error messages on every backup:
```
Unexpected number of deleted filesets 18 vs 19

Feb 14, 2018 1:01 AM: Message
Re-creating missing index file for duplicati-bcf55e7939a9d404287537f5f92a996fd.dblock.zip.aes

Feb 14, 2018 1:01 AM: Message
Expected there to be a temporary fileset for synthetic filelist (123, duplicati-b10312ffaf6a74f79a238ffef3dcded11.dblock.zip.aes), but none was found?
```
Were we ever able to figure out the cause or how to fix it?