
Use disk option(s)

Open · mirh opened this issue 5 years ago · 1 comment

I get why everything happens in memory, but sometimes you just cannot fit your whole test file into it. Or maybe you can, but the combined size of the file plus the RAM footprint of the hungriest compressors makes it a bad day for you. Or you could even fit all of that, but the "clue" you are looking for happens below the bandwidth threshold of your non-volatile memory.

So, putting aside that I/O bottlenecks could be detected and flagged (I suppose the same may be true for RAM in some crazy situation?), could we get some kind of knob to control the "disk type"? I see three possible levels for this:

  • compression on disk (since all algorithms should be asymmetrically more taxing here), everything else as usual, meaning the resulting archive gets copied back into memory before decompression (see the sketch after this list)
  • an intermediate mode where, once a single file/element/part is decompressed, it may as well be deleted on the spot (making the best-case RAM requirement: algorithm footprint + archive size + sector size)
  • everything happens on disk
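
To make the first mode concrete, here is a minimal sketch (this is not lzbench code; zlib merely stands in for whatever codec is being benchmarked): compression streams from disk to disk using only small fixed buffers, and afterwards the finished archive, which is all that now has to fit in RAM, is read back into memory so decompression can run from there as usual.

```c
/* Sketch of the first proposed mode, with zlib as a stand-in codec:
 * compress disk-to-disk with only CHUNK-sized buffers in RAM, then
 * copy the finished archive back into memory before decompression. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define CHUNK (1 << 20) /* 1 MiB working buffers */

/* Deflate `in` to `out` without ever holding more than 2*CHUNK in RAM. */
static int compress_disk_to_disk(FILE *in, FILE *out)
{
    unsigned char *ibuf = malloc(CHUNK), *obuf = malloc(CHUNK);
    z_stream zs;
    int flush, ret = Z_OK;

    memset(&zs, 0, sizeof zs);
    if (!ibuf || !obuf || deflateInit(&zs, Z_DEFAULT_COMPRESSION) != Z_OK) {
        free(ibuf); free(obuf);
        return -1;
    }

    do {
        zs.avail_in = (uInt)fread(ibuf, 1, CHUNK, in);
        zs.next_in  = ibuf;
        flush = feof(in) ? Z_FINISH : Z_NO_FLUSH;
        do {                            /* drain all output for this input */
            zs.avail_out = CHUNK;
            zs.next_out  = obuf;
            ret = deflate(&zs, flush);
            fwrite(obuf, 1, CHUNK - zs.avail_out, out);
        } while (zs.avail_out == 0);
    } while (flush != Z_FINISH);

    deflateEnd(&zs);
    free(ibuf); free(obuf);
    return ret == Z_STREAM_END ? 0 : -1;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s input archive\n", argv[0]);
        return 1;
    }

    FILE *in = fopen(argv[1], "rb"), *out = fopen(argv[2], "wb");
    if (!in || !out || compress_disk_to_disk(in, out) != 0)
        return 1;
    fclose(in); fclose(out);

    /* "Copied back into memory before decompression": now only the
     * compressed size must fit in RAM, not the original file. */
    FILE *arc = fopen(argv[2], "rb");
    fseek(arc, 0, SEEK_END);
    long csize = ftell(arc);
    rewind(arc);
    unsigned char *cbuf = malloc((size_t)csize);
    if (!cbuf || fread(cbuf, 1, (size_t)csize, arc) != (size_t)csize)
        return 1;
    fclose(arc);

    printf("archive (%ld bytes) now in RAM; decompress from cbuf as usual\n", csize);
    free(cbuf);
    return 0;
}
```

With this flow, peak memory during compression is just the two working buffers, and during decompression it is the compressed archive plus the codec's own footprint.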

mirh · Jul 30 '20

Please try the -m option, which splits large files into parts. You can also compress a large file in blocks/chunks with the -b option.
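
For illustration, a hypothetical invocation combining both might look like `lzbench -ezstd -m1024 -b262144 bigfile.tar`, i.e. cap memory at roughly 1 GB and process the file in 256 MB chunks. The values here are made up, and my reading of the usage text is that -m takes a memory limit in MB while -b takes a block size in KB, so double-check against your build's help output.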

inikep · Sep 01 '20