htop
Fix numbers larger than 100 terabytes
This is an issue similar to #733. If a process has an RD_CHAR or WR_CHAR value larger than 100 TB and those columns are displayed, htop terminates and prints "htop: Success".
The situation can be reproduced by running something that calls read/write a lot for a few hours, such as cat /dev/zero > /dev/null.
Can we have a solution that is more future-proof? Even if we fix the 100 TB problem now, there is still the potential for an snprintf() overflow on even larger values, e.g. petabytes or exabytes (2^64 bytes = 16 EB).
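To illustrate the failure mode, here is a minimal sketch assuming a fixed-width column buffer and an xSnprintf-style wrapper that aborts on truncation. The names, the 12-byte column, and the kilobyte conversion are guesses for illustration, not htop's actual code; with errno still 0, err() appends "Success", which matches the observed message.

```c
#include <err.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch of the failure mode: a snprintf() wrapper that aborts when the
 * formatted text does not fit into a fixed-width column buffer. */
static void xSnprintf(char* buf, size_t len, const char* fmt, uint64_t value) {
   int n = snprintf(buf, len, fmt, value);
   if (n < 0 || (size_t)n >= len)
      err(1, NULL); /* errno is likely still 0, so this prints "...: Success" */
}

int main(void) {
   char column[12]; /* fixed-width counter column */
   uint64_t rd_char = 120ULL * 1000 * 1000 * 1000 * 1000; /* ~120 TB read */
   /* Past roughly 100 TB the kilobyte count needs 12 digits, so the
    * formatted field no longer fits and the wrapper aborts. */
   xSnprintf(column, sizeof(column), "%11" PRIu64 " ", rd_char / 1024);
   puts(column);
   return 0;
}
```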
And judging by the reproduction case, this should be a local denial-of-service vulnerability (local user A can attempt to crash user B's htop monitor this way).
OK, it can now handle any RD_CHAR value within the 64-bit range, as well as I/O rates of up to 10 PB/s.
(There are still some theoretical situations that could make snprintf() overflow, e.g. memory larger than 10 PB or jobs running for more than 1140 years. Should we also handle these extreme cases?)
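On the future-proofing question, a unit-scaling formatter keeps the printed width bounded no matter how large the 64-bit counter gets. The following is only a rough sketch of that idea under assumed column widths; the merged commit below is the authoritative fix.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Rough sketch of unit scaling: step K -> M -> G -> T -> P -> E so the
 * printed width stays small for any 64-bit byte count.  Illustrative only;
 * see the merged commit for htop's actual implementation. */
static void humanBytes(char* buf, size_t len, uint64_t bytes) {
   static const char units[] = "KMGTPE";
   double value = bytes / 1024.0; /* start in kilobytes */
   size_t unit = 0;
   while (value >= 10000.0 && unit + 1 < sizeof(units) - 1) {
      value /= 1024.0;
      unit++;
   }
   snprintf(buf, len, "%4.0f%c", value, units[unit]);
}

int main(void) {
   char buf[16];
   humanBytes(buf, sizeof(buf), UINT64_MAX); /* 2^64 - 1 bytes, about 16 EB */
   printf("[%s]\n", buf); /* prints [  16E]; the width never explodes */
   return 0;
}
```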
Merged here: htop-dev/htop@00d333c