Segmentation fault
It looks like an error related to memory management.
total used free shared buff/cache available
Mem: 15Gi 4.2Gi 3.2Gi 8.0Mi 8.2Gi 9.8Gi
Swap: 4.0Gi 419Mi 3.6Gi
parpar04 -s2867200B -r5% --min-recovery-slices=5% -dpow2 -m0 -o test.par2 'mydir' -O
Input data: 85.15 GiB (32062 slices from 349 files)
Recovery data: 4385.94 MiB (1604 * 2800 KiB slices)
Input pass(es): 1, processing 1604 * 2800 KiB chunks per pass
Slice memory usage: 4451.56 MiB (1604 recovery + 24 processing chunks)
Read buffer size: 2800 KiB * max 8 buffers
Multiply method: Xor-Jit (SSE2) with 63.75 KiB loop tiling, 8 threads
Input batching: 12 chunks, 2 batches
Segmentation fault
parpar04 -s3584000B -r5% --min-recovery-slices=5% -dpow2 -m0 -o test.par2 'mydir' -O
Input data: 85.15 GiB (25789 slices from 349 files)
Recovery data: 4409.18 MiB (1290 * 3500 KiB slices)
Input pass(es): 1, processing 1290 * 3500 KiB chunks per pass
Slice memory usage: 4491.21 MiB (1290 recovery + 24 processing chunks)
Read buffer size: 3500 KiB * max 8 buffers
Multiply method: Xor-Jit (SSE2) with 63.75 KiB loop tiling, 8 threads
Input batching: 12 chunks, 2 batches
Segmentation fault
Increasing the slice size by one article size (716800 B) allows processing to continue, I'm guessing because it cuts the memory usage in half.
parpar04 -s4300800B -r5% --min-recovery-slices=5% -dpow2 -m0 -o test.par2 'mydir' -O
Input data: 85.15 GiB (21259 slices from 349 files)
Recovery data: 4359.96 MiB (1063 * 4200 KiB slices)
Input pass(es): 2, processing 1063 * 2100 KiB chunks per pass
Slice memory usage: 2229.2 MiB (1063 recovery + 24 processing chunks)
Read buffer size: 4096 KiB * max 8 buffers
Multiply method: Xor-Jit (SSE2) with 63.75 KiB loop tiling, 8 threads
Input batching: 12 chunks, 2 batches
Calculating: 0.47%
If you remove -m0 or set a low-ish limit (e.g. 4500M), processing will continue - again, because the number of input passes increases and less memory is used.
Thanks for reporting.
A limitation of the current dev code is that it reserves all the memory in a single allocation. Some memory allocators don't seem to like large allocations and fail - I'm guessing that's what's happening here.
Split allocations will need to be implemented to fix the issue. For now, you'll need to keep the memory limit small enough to not break the memory allocator.