perf: Occasionally sort database file by frequency counts
Currently, we seek to a target entry's row and update the count / access time in-place.
If many entries are accessed, this can "fragment" the database, so the sort is no longer roughly one-pass O(n) and instead degrades to O(n log n). (Timsort's best case on mostly sorted data is O(n).)
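For illustration, a minimal sketch of the in-place update under an assumed fixed-width `count,timestamp,path` row layout (the layout, the `bump_in_place` name, and its parameters are hypothetical, not frece's actual code). Only the target row's bytes change; its position in the file does not, which is how the ordering drifts away from sorted:

```rust
use std::fs::OpenOptions;
use std::io::{Seek, SeekFrom, Write};

/// Overwrite the count/timestamp fields of row `idx` in-place.
/// Assumes a hypothetical fixed-width row "COUNT,TIMESTAMP,PATH\n"
/// so every row starts at a predictable byte offset.
fn bump_in_place(
    db_path: &str,
    idx: u64,      // row index of the target entry
    row_len: u64,  // fixed byte length of each row
    new_count: u64,
    now: u64,      // new access timestamp
) -> std::io::Result<()> {
    let mut file = OpenOptions::new().write(true).open(db_path)?;
    // Seek straight to the row; no other row is touched,
    // so the file's (previously sorted) order is left as-is.
    file.seek(SeekFrom::Start(idx * row_len))?;
    write!(file, "{:010},{:010}", new_count, now)?;
    Ok(())
}
```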
Possible optimization: every once in a while, "defragment" the database by rewriting it with correctly sorted entries (see the sketch below). Future `frece print` calls (which read/sort everything) would then be faster since the data would already be mostly sorted.
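A minimal sketch of such a defragment pass, again assuming a hypothetical `count,timestamp,path` line format (frece's real format and internals may differ): read every row, sort by count descending, and rewrite the file. The "every once in a while" trigger could be as simple as running this on every Nth update.

```rust
use std::fs;

/// Rewrite the database fully sorted by count (descending).
/// Assumes a hypothetical "count,timestamp,path" line format.
fn defragment(db_path: &str) -> std::io::Result<()> {
    let contents = fs::read_to_string(db_path)?;
    let mut rows: Vec<&str> = contents.lines().collect();
    // Sort on the leading count field; malformed rows sort last.
    rows.sort_by_key(|row| {
        let count: u64 = row
            .split(',')
            .next()
            .and_then(|c| c.parse().ok())
            .unwrap_or(0);
        std::cmp::Reverse(count)
    });
    // Write to a temp file, then rename over the original so a
    // crash mid-write can't leave a half-written database.
    let tmp = format!("{db_path}.tmp");
    fs::write(&tmp, rows.join("\n") + "\n")?;
    fs::rename(&tmp, db_path)?;
    Ok(())
}
```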
Downsides: it increases code complexity, and I don't think it will benefit most users (including me). One alternative is manually running a `frece print` command to just write a new, sorted database. :)