Przemyslaw Skibinski
Thanks for the answer. I think that for now you should return an error when the compressed data is bigger than 2^19. Corrupted data is a huge problem in comparison to a...
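A minimal sketch of the kind of guard suggested above; the constant and function names here are hypothetical illustrations, not identifiers from zling's actual code:

```c
#include <stddef.h>

/* Hypothetical guard: reject any compressed block larger than 2^19 bytes
 * instead of attempting to decode it. kMaxCompressedBlock and
 * zling_check_block_size are illustrative names only. */
#define kMaxCompressedBlock ((size_t)1 << 19)

static int zling_check_block_size(size_t compressed_size) {
    if (compressed_size > kMaxCompressedBlock) {
        return -1;  /* report an error: likely corrupted or malicious input */
    }
    return 0;       /* size is within the expected limit */
}
```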
Your patch works, but I now wonder whether zling has more problems.
Good to know, thanks Evan.
Lizard v1.0 is LZ5 v2.0 + bug fixes
Thanks for the information. I fixed it at https://github.com/inikep/lizard/commit/02c35c25e565
Thanks for reporting. I tried to reproduce your issue with the latest Lizard 1.0 at https://github.com/inikep/lizard/commit/02491c71c2e6fd5c10997404df2f18d0fc7afadb. I used `gcc-8` with UBSan and ASan and they found no issues. Please try...
There is an example program that demonstrates the basic usage of the compress/decompress functions: https://github.com/inikep/lizard/blob/lizard/examples/simple_buffer.c The simplest API is: ``` LIZARDLIB_API int Lizard_compress (const char* src, char* dst, int...
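As a rough illustration of that API, here is a minimal compress/decompress round trip. The full signatures of `Lizard_compress`, `Lizard_compressBound` and `Lizard_decompress_safe`, the header file names, and the compression level 17 are assumptions based on the Lizard headers rather than quoted from the comment above:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "lizard_compress.h"    /* assumed header: Lizard_compress, Lizard_compressBound */
#include "lizard_decompress.h"  /* assumed header: Lizard_decompress_safe */

int main(void)
{
    const char src[] = "Lizard round-trip example, Lizard round-trip example.";
    const int  srcSize = (int)sizeof(src);

    /* Worst-case compressed size for srcSize input bytes. */
    const int dstCapacity = Lizard_compressBound(srcSize);
    char* const compressed = malloc((size_t)dstCapacity);
    char* const restored   = malloc((size_t)srcSize);
    if (!compressed || !restored) return 1;

    /* Level 17 is only an example; Lizard levels span roughly 10..49. */
    const int cSize = Lizard_compress(src, compressed, srcSize, dstCapacity, 17);
    if (cSize <= 0) { printf("compression failed\n"); return 1; }

    /* Safe variant: never writes more than srcSize bytes to the output buffer. */
    const int dSize = Lizard_decompress_safe(compressed, restored, cSize, srcSize);
    if (dSize != srcSize || memcmp(src, restored, (size_t)srcSize) != 0) {
        printf("round trip failed\n");
        return 1;
    }

    printf("OK: %d bytes -> %d compressed -> %d restored\n", srcSize, cSize, dSize);
    free(compressed);
    free(restored);
    return 0;
}
```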
It is not possible to get the decompressed bound from the compressed size. But I have 2 ideas: - use `Lizard_decompress_safe_continue()` to decompress the data in parts with a fixed-size buffer (look...
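A rough sketch of the first idea (streaming decompression with bounded memory). `Lizard_decompress_safe_continue()` is the function named above; the header names, `Lizard_createStreamDecode`/`Lizard_freeStreamDecode`, the 4-byte block-size framing, and the assumption that the producer wrote blocks with the matching streaming compressor are all assumptions modeled on the LZ4-style streaming API, not a definitive implementation:

```c
#include <stdio.h>
#include <stdlib.h>
#include "lizard_compress.h"    /* assumed header: Lizard_compressBound */
#include "lizard_decompress.h"  /* assumed header: streaming decode API */

/* Hypothetical framing: each block is stored as a 4-byte little-endian
 * compressed size followed by the compressed bytes, and the producer used the
 * streaming compressor with the same double-buffer scheme. */
enum { BLOCK_BYTES = 64 * 1024 };

static int read_u32le(FILE* f, unsigned* out)
{
    unsigned char b[4];
    if (fread(b, 1, 4, f) != 4) return 0;
    *out = (unsigned)b[0] | ((unsigned)b[1] << 8)
         | ((unsigned)b[2] << 16) | ((unsigned)b[3] << 24);
    return 1;
}

/* Decompresses a block-framed stream using a fixed amount of memory,
 * regardless of the total uncompressed size. Returns 0 on success. */
int decompress_stream(FILE* in, FILE* out)
{
    int result = -1;
    int decIdx = 0;
    char* const decBuf = malloc(2 * (size_t)BLOCK_BYTES);   /* two rotating slots */
    char* const cmpBuf = malloc((size_t)Lizard_compressBound(BLOCK_BYTES));
    Lizard_streamDecode_t* const sd = Lizard_createStreamDecode();

    if (decBuf && cmpBuf && sd) {
        for (;;) {
            unsigned cmpBytes;
            if (!read_u32le(in, &cmpBytes)) { result = 0; break; }   /* clean EOF */
            if (cmpBytes == 0 || cmpBytes > (unsigned)Lizard_compressBound(BLOCK_BYTES))
                break;                                               /* corrupted header */
            if (fread(cmpBuf, 1, cmpBytes, in) != cmpBytes)
                break;                                               /* truncated input */

            char* const dst = decBuf + decIdx * BLOCK_BYTES;
            const int decBytes = Lizard_decompress_safe_continue(
                sd, cmpBuf, dst, (int)cmpBytes, BLOCK_BYTES);
            if (decBytes <= 0)
                break;                                               /* corrupted block */

            fwrite(dst, 1, (size_t)decBytes, out);
            decIdx = (decIdx + 1) % 2;                               /* rotate slots */
        }
    }

    if (sd) Lizard_freeStreamDecode(sd);
    free(cmpBuf);
    free(decBuf);
    return result;
}
```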
These changes will not be correct in 8.0.23 after https://github.com/percona/percona-server/pull/4136 is merged.
`rocksdb_enable_pipelined_write` is `OFF` by default. Sorry, it was my mistake.