Address watch descriptor reuse

Open · hannobraun opened this issue on Aug 20, 2017 · 2 comments

From the inotify man page:

> When a watch descriptor is removed by calling inotify_rm_watch(2) (or because a watched file is deleted or the filesystem that contains it is unmounted), any pending unread events for that watch descriptor remain available to read. As watch descriptors are subsequently allocated with inotify_add_watch(2), the kernel cycles through the range of possible watch descriptors (0 to INT_MAX) incrementally. When allocating a free watch descriptor, no check is made to see whether that watch descriptor number has any pending unread events in the inotify queue. Thus, it can happen that a watch descriptor is reallocated even when pending unread events exist for a previous incarnation of that watch descriptor number, with the result that the application might then read those events and interpret them as belonging to the file associated with the newly recycled watch descriptor. In practice, the likelihood of hitting this bug may be extremely low, since it requires that an application cycle through INT_MAX watch descriptors, release a watch descriptor while leaving unread events for that watch descriptor in the queue, and then recycle that watch descriptor. For this reason, and because there have been no reports of the bug occurring in real-world applications, as of Linux 3.15, no kernel changes have yet been made to eliminate this possible bug.
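To make the problematic sequence concrete, here is a rough sketch of how it could play out against this crate. The method and field names (`add_watch`, `rm_watch`, `read_events_blocking`, `Event::wd`) are approximate and may differ between versions; the paths are placeholders.

```rust
use inotify::{Inotify, WatchMask};

fn main() -> std::io::Result<()> {
    let mut inotify = Inotify::init()?;

    // 1. Watch a file and let events accumulate without reading them.
    let wd_old = inotify.add_watch("/tmp/a", WatchMask::MODIFY)?;

    // 2. Remove the watch. Pending unread events for `wd_old` stay queued.
    inotify.rm_watch(wd_old)?;

    // 3. Once the kernel has cycled through INT_MAX descriptors, this call
    //    can hand out the same numeric descriptor that `wd_old` had...
    let wd_new = inotify.add_watch("/tmp/b", WatchMask::MODIFY)?;

    // 4. ...at which point the stale events for /tmp/a compare equal to
    //    `wd_new` and are indistinguishable from events for /tmp/b.
    let mut buffer = [0u8; 4096];
    for event in inotify.read_events_blocking(&mut buffer)? {
        if event.wd == wd_new {
            // This might actually be a leftover event for /tmp/a.
        }
    }

    Ok(())
}
```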

I think a solution to this problem would be to force the user to read the full event queue before adding another watch. In principle, it should be relatively straightforward to enforce this at compile time, but that would require us to rethink Inotify's API.
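To illustrate what I mean, here is one hypothetical shape such an API could take. None of these names exist in inotify-rs today; this is purely a typestate sketch of the idea.

```rust
// Hypothetical typestate sketch; none of these names are the current
// inotify-rs API. The idea: only a handle that proves the queue was just
// drained can add a watch, so stale events can never be misattributed.

struct Inotify; // would wrap the inotify file descriptor
struct Drained(Inotify); // proof that the event queue was read to empty

#[derive(Debug)]
struct Event {
    wd: i32,
}

impl Inotify {
    /// Reads the queue until it is empty, consuming `self`. Returns the
    /// pending events plus a `Drained` handle, which is the only type
    /// that can add watches.
    fn drain_events(self) -> (Vec<Event>, Drained) {
        // ... real code would read from the fd until EAGAIN ...
        (Vec::new(), Drained(self))
    }
}

impl Drained {
    /// Consumes the proof and hands back a plain `Inotify`, so the next
    /// `add_watch` requires a fresh drain.
    fn add_watch(self, _path: &str) -> Inotify {
        self.0
    }
}

fn main() {
    let inotify = Inotify;
    let (_events, drained) = inotify.drain_events();
    let inotify = drained.add_watch("/tmp/a");
    // Calling `drained.add_watch(...)` again would fail to compile:
    // `drained` has been moved. Another drain is required first:
    let (_events, _drained) = inotify.drain_events();
}
```

The point of the sketch is that `add_watch` is only reachable through a value that proves the queue was just drained, and the move semantics enforce that at compile time for free.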

— hannobraun, Aug 20 '17 09:08

Thinking in terms of implementing futures (see #49): would it be possible to continuously poll for watch events on a separate thread and return them via a future?
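Roughly something like this, with a std channel standing in for the future machinery and a placeholder event type:

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for the crate's event type.
struct Event {
    wd: i32,
}

/// Spawns a thread that would block on the inotify fd and forward each
/// decoded event over a channel. For #49, the receiving end could be
/// wrapped in a futures Stream instead.
fn spawn_event_reader() -> mpsc::Receiver<Event> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Real code would loop over a blocking read here; we send a few
        // dummy events so the sketch terminates.
        for wd in 0..3 {
            if tx.send(Event { wd }).is_err() {
                break; // receiver was dropped, stop polling
            }
        }
    });
    rx
}

fn main() {
    for event in spawn_event_reader() {
        println!("event for wd {}", event.wd);
    }
}
```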

— arsalan86, Aug 23 '17 18:08

Certainly possible, but I don't think it would be an advantage. Even if you polled at short intervals, this issue would still be present (though perhaps less likely). You'd need some kind of synchronization between the main thread and the polling thread to fully solve the problem, and at that point it seems like way more complexity than is warranted. In addition, I don't think it's possible to prevent this issue at compile time if multiple threads are involved.

Generally speaking, I can't imagine circumstances in which I would accept a solution that requires additional threads. That kind of thing would be way too heavyweight for what is supposed to be a simple wrapper.

— hannobraun, Aug 24 '17 09:08