[main-] fix rare duplication of replay cmds
When replaying command files, in rare circumstances, some commands can be run twice.
The cause is that `reload()` is called twice for the command file: once inside `eval_vd()`, and again after `eval_vd()` finishes:
https://github.com/saulpw/visidata/blob/3ad7a3d0c1475ff53bf31481506b9a748b48ac7c/visidata/main.py#L49
https://github.com/saulpw/visidata/blob/3ad7a3d0c1475ff53bf31481506b9a748b48ac7c/visidata/main.py#L352

Both `reload()` calls can run at the same time in two different threads, because `TableSheet.reload()` has the `@asyncthread` decorator. Both threads add to `self.rows` via `self.addRow(r)` in `TableSheet.loader()`, so command rows can get added twice.
The specific sequence of events that must happen to trigger the bug is:

- the first `reload()` call must run `TableSheet.loader()` past the line `self.rows = []`: https://github.com/saulpw/visidata/blob/3ad7a3d0c1475ff53bf31481506b9a748b48ac7c/visidata/sheets.py#L282
- before that first `reload()` finishes adding rows, the second `reload()` call must also run `TableSheet.loader()` past the same `self.rows = []` line
- before the second `reload()` call finishes adding rows, the first call continues adding rows
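The interleaving above can be sketched in plain Python. This is a hypothetical minimal model, not VisiData's actual code: the `Sheet` class, row names, and timings below are invented for illustration, standing in for `TableSheet.loader()` running under `@asyncthread` in two threads at once.

```python
import threading
import time

class Sheet:
    """Toy stand-in for a sheet whose loader resets then fills self.rows."""
    def __init__(self):
        self.rows = []

    def reload(self, source_rows):
        self.rows = []              # the `self.rows = []` line both threads must pass
        for r in source_rows:
            time.sleep(0.01)        # slow loader, so the two calls interleave
            self.rows.append(r)     # both threads append every source row

sheet = Sheet()
commands = ['cmd1', 'cmd2', 'cmd3', 'cmd4', 'cmd5']  # 5 replay commands

# First "reload" starts and passes its `self.rows = []` line...
t1 = threading.Thread(target=sheet.reload, args=(commands,))
t1.start()
time.sleep(0.005)
# ...then a second "reload" starts before the first finishes adding rows.
t2 = threading.Thread(target=sheet.reload, args=(commands,))
t2.start()

t1.join()
t2.join()

# Under these conditions the 5 commands end up duplicated (typically 10 rows).
print(len(sheet.rows))
```

With a fast loader the first call usually finishes before the second starts, which may be why the bug needs long replay files (or an artificially slow loader) to surface.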
I made a pathological loader that triggers these conditions every time, by loading only two command rows per second: 5c97a847642f3563f95bdefa21a0a94874a676f0. To test with that loader, run `touch row_duplication.slow; vd -p dupes.slow` and see that the dupes_vd sheet contains more than 5 commands. The loader only generates 5 commands, but in my tests the demo code duplicates all of them, for a total of 10 commands.
Many versions of VisiData since Sep 2019 are likely affected, as of 330117fa328ecd8498fb76f08954bce2b328851c. On my system, though, I have not yet found a way to trigger the bug with short replay files (under 100 lines); for much longer files, it happens every time. For example, it happens if I add ~5000 lines of comments to a short `.vdj` file. But for short files with no comments (~10 lines), I could not trigger it at all in ~500 tries, even while adding CPU/IO/disk stress with the Ubuntu `stress` binary.