Nick Terrell
This is definitely a bug in `v0.7.3`, though I don't quite know what's going on yet. I'll look into it a little more, then try just skipping `v0.7.3`...
On versions 0.7.3 - 0.8.0 (and possibly higher; I haven't tested yet), this line is failing: https://github.com/facebook/zstd/blob/75ed1a815ef9695a2ba5afeb7f8e3f06d9c732a6/programs/fileio.c#L329 The error is `Operation not authorized at current processing stage`. E.g. https://github.com/facebook/zstd/actions/runs/3752019678/jobs/6373711031
Well, it is randomly passing again, so I guess I'll close this PR.
@nh2 If the memory usage of pzstd is too high, then you can always use zstd to decompress instead; the formats are fully compatible. Zstd decompression is already very fast, and...
Please re-open if you have further questions.
Thanks for bringing up the edge case, I'll fix it shortly!
The value for `k` can vary quite a bit based on the data. I'd recommend using `ZDICT_optimizeTrainFromBuffer_cover()`, if it works on your data, so it is automatically selected for...
The new dictionary builder only considers samples that are >= 8 bytes in its analysis, so it will produce a nearly empty dictionary. The final step fails if the dictionary is...
I'm not going to update fastcover to handle samples < 8 bytes. It will only make the performance of the algorithm (slightly) worse on non-degenerate data. Zstd doesn't work terribly...
The zstd format provides the dictionary ID in the frame header. When you don't use a dictionary, the ID is 0. So to detect whether compression used a dictionary,...
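In practice you'd just call `ZSTD_getDictID_fromFrame()` from `zstd.h` to read that ID. Purely to illustrate where the ID lives in the frame, here is a hand-rolled sketch of the header layout described in RFC 8878; the function name `parse_dict_id` and the sample frames are my own, not part of the library.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch: recover the dictionary ID from a zstd frame
 * header, per RFC 8878. Returns 0 when no dictionary ID is present
 * (or the input is not a valid zstd frame) -- the same convention
 * the library's ZSTD_getDictID_fromFrame() uses. */
static uint32_t parse_dict_id(const uint8_t *src, size_t size)
{
    if (size < 5) return 0;
    /* Magic_Number, little-endian 0xFD2FB528 */
    if (src[0] != 0x28 || src[1] != 0xB5 || src[2] != 0x2F || src[3] != 0xFD)
        return 0;
    uint8_t fhd = src[4];                 /* Frame_Header_Descriptor  */
    int single_segment = (fhd >> 5) & 1;  /* no Window_Descriptor if set */
    int did_flag = fhd & 3;               /* Dictionary_ID_flag (bits 0-1) */
    static const size_t did_size[4] = { 0, 1, 2, 4 };
    if (did_flag == 0) return 0;          /* no dictionary was used */
    size_t pos = 5 + (single_segment ? 0 : 1);
    if (pos + did_size[did_flag] > size) return 0;
    uint32_t id = 0;
    for (size_t i = 0; i < did_size[did_flag]; i++)
        id |= (uint32_t)src[pos + i] << (8 * i);  /* little-endian field */
    return id;
}

/* Hand-crafted header: 1-byte Dictionary_ID field holding 42. */
static const uint8_t dict_frame[] = {
    0x28, 0xB5, 0x2F, 0xFD,  /* magic number            */
    0x01,                    /* FHD: Dictionary_ID_flag = 1 */
    0x00,                    /* Window_Descriptor       */
    0x2A                     /* Dictionary_ID = 42      */
};

/* Same header shape but with no dictionary ID field at all. */
static const uint8_t plain_frame[] = {
    0x28, 0xB5, 0x2F, 0xFD,  /* magic number            */
    0x00,                    /* FHD: Dictionary_ID_flag = 0 */
    0x00                     /* Window_Descriptor       */
};
```

So a reader that gets a nonzero ID knows a dictionary was used (and which one); a zero means either no dictionary or a frame written with the ID deliberately omitted.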