linkding
Lost data, SQL database rolled back in time
Hello,
I've been using this tool for a while and I'm happy with it.
Recently I ran into a strange issue when editing a bookmark, which seems to have caused a rollback of the SQL database.
So now my database looks the same as it did three months ago. This is a disaster given that I don't have a backup of my Docker data.
Is there anything I can do? I tried rebooting and rebuilding the image, to no avail.
Sorry for your loss. Take this as a learning lesson to back everything up
I'm aware that I should have made a backup of my data, thanks. What I wanted to understand is why / how the SQL database went back to what seems like a previous version of itself.
@onjour I'd like to know that too. Someone on Reddit had similar issues
Yeah, I'm a bit worried about this as well after reading this issue and the Reddit post.
Can you provide more information @onjour? For example, what version you're on, whether you're running it from Docker, what storage you're using, etc.
I think SQLite shouldn't break so easily, but who knows. Going back in time sounds like transactions were only being kept in memory and never saved to disk, but from what I understand that shouldn't happen.
Sorry to hear that, but I can't see what I can do from my side. I can only state that I think it's very unlikely to be an application issue. linkding does not contain any functionality that could explain rolling back a database by several months. It only uses the database API of Django (the web framework linkding is built on), which in turn uses SQLite under the hood. Both are rock-solid software libraries.
I won't speculate further about what might have happened or try to give advice, as I'm honestly not an expert on this. I would have offered to check the Docker logs, but with the container recreated those should be gone. As I don't see anything actionable here, I'll close the issue for now.
So, I kind of accidentally broke my database by making 30 API PATCH calls in parallel instead of sequentially. I ended up getting a bunch of lock errors, and the same tag was duplicated four times in the tags table.
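For anyone curious, the lock error itself is easy to reproduce outside linkding with the standard sqlite3 module. This is a hypothetical standalone sketch, not linkding's actual code: one connection holds a write transaction while a second one, configured not to wait at all, tries to write.

```python
import os
import sqlite3
import tempfile

# Standalone sketch: trigger "database is locked" with two connections
# writing to the same SQLite file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

c1 = sqlite3.connect(path)
c1.execute("CREATE TABLE tags (name TEXT)")
c1.execute("BEGIN IMMEDIATE")  # take the write lock and keep it open
c1.execute("INSERT INTO tags VALUES ('linux')")

c2 = sqlite3.connect(path, timeout=0)  # fail immediately instead of waiting
try:
    c2.execute("INSERT INTO tags VALUES ('docker')")
    locked = False
except sqlite3.OperationalError:
    locked = True  # "database is locked"

c1.commit()
print(locked)
```

With a non-zero `timeout` on the second connection, the write would simply wait for the first transaction to finish instead of erroring out.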
From what I've read, SQLite doesn't handle concurrent writes very well. This can be worked around by increasing the timeout in the Django settings, so each write waits a bit longer for the lock instead of immediately returning a lock error.
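For reference, that timeout is just an entry in Django's `DATABASES` setting. A hypothetical fragment (the name and value are illustrative, not linkding's defaults) could look like:

```python
# Hypothetical Django settings fragment: "timeout" is passed through to
# sqlite3.connect() and controls how long a writer waits for the database
# lock before raising "database is locked" (the sqlite3 default is 5 s).
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "db.sqlite3",
        "OPTIONS": {
            "timeout": 30,  # wait up to 30 seconds instead of failing fast
        },
    }
}
```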
I don't know much about SQLite, but it seems it can use Write-Ahead Logging (WAL), which stores changes in a separate file and merges them into the main DB from time to time (a checkpoint). That could explain the DB rolling back in time if that file got lost or was never merged for some reason. But it's just a guess; it's difficult to know what happened without more detailed information.
Another, more likely explanation of what happened, quoting the SQLite documentation:
SQLite uses POSIX advisory locks to implement locking on Unix. On Windows it uses the LockFile(), LockFileEx(), and UnlockFile() system calls. SQLite assumes that these system calls all work as advertised. If that is not the case, then database corruption can result. One should note that POSIX advisory locking is known to be buggy or even unimplemented on many NFS implementations (including recent versions of Mac OS X) and that there are reports of locking problems for network filesystems under Windows. Your best defense is to not use SQLite for files on a network filesystem.
That seems like something that could more easily happen on a VPS / Cloud provider.
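As an aside, the advisory locks that quote refers to are easy to play with from Python. SQLite itself uses fcntl()-style POSIX locks; the sketch below uses the related flock() call instead, because flock locks attached to different file descriptors conflict even within a single process, which makes the refusal visible in one script (all names here are illustrative):

```python
import fcntl
import os
import tempfile

# Illustrative sketch of advisory file locking on Unix. SQLite relies on
# fcntl()-style POSIX locks; flock() is used here because its locks
# conflict between file descriptors even inside one process.
path = os.path.join(tempfile.mkdtemp(), "lockdemo")

f1 = open(path, "w")
fcntl.flock(f1, fcntl.LOCK_EX | fcntl.LOCK_NB)  # exclusive lock, granted

f2 = open(path, "w")
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    conflict = False
except OSError:
    conflict = True  # second lock attempt refused, as advisory locking promises

f1.close()
f2.close()
print(conflict)
```

On a local filesystem the second attempt is refused as expected; the point of the quoted warning is that on a buggy network filesystem these calls may silently "succeed", letting two writers corrupt the database.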