Guillaume
Great! I'm able to reproduce now. Thank you!
@191919 it now works here with the latest dev branch:

```
build/benchmark -1 density_01263.bin
Single threaded in-memory benchmark powered by Density 0.15.0
Copyright (C) 2015 Guillaume Voirin
Built for...
```
@191919 thanks, yes I can understand that `calloc` still uses a lot of CPU time, as clearing is still needed when reaching the big dictionary stage, but it is now...
> My dataset is mixed with text and binary data. In that case a read-only dictionary probably won't work efficiently (although it could be worth a test, if you have...
@191919 wow that's strange, because it seems to be in the initialization phases, where only 32 + 2048 bytes are zeroed. I can't imagine that taking 94% of CPU time, something...
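For scale, and purely as a stand-alone illustration outside of DENSITY's code base (the 32 + 2048 byte sizes are simply taken from the comment above), zeroing that little memory per initialization is a trivial amount of work, which a quick micro-check like the following can confirm on the test machine:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t small = 32, dict = 2048;     /* sizes mentioned in the comment above */
    const long iterations = 1000000;

    clock_t start = clock();
    for (long i = 0; i < iterations; i++) {
        unsigned char *a = calloc(1, small);  /* zeroed header-sized block */
        unsigned char *b = calloc(1, dict);   /* zeroed dictionary-sized block */
        if (!a || !b) return 1;
        /* touch the memory so the allocator cannot skip the zeroing work */
        a[0] = b[0] = (unsigned char)i;
        free(a);
        free(b);
    }
    double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("%ld calloc pairs (32 + 2048 bytes) took %.3f s\n", iterations, seconds);
    return 0;
}
```

If a million such initializations complete in a fraction of a second, `calloc` showing up at 94% in a profile almost certainly points at something else.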
@191919 today's update brings a small API change: you are now able to know, via the context, what the initial size of the compressed data was, so it's way...
First of all, thanks for your Homebrew formula @alebcay! I'll check this out soon.
API issue: density_decompress() requires a larger buffer than needed to store the decompressed data
Hey Luke! Yes, that's true, although density already uses a safe mode when processing the last parts of a given input buffer. The function density_decompress_safe_size() is necessary though, as it...
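For reference, here is a minimal sketch of the intended allocation pattern with the buffer API, assuming the function and field names of the recent `density_api.h` headers; the exact signatures may differ on the dev branch discussed in this thread. The point is that the decompression buffer is sized with density_decompress_safe_size(), which can be larger than the original data:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "density_api.h"

int main(void) {
    const char *text = "A small sample payload for the density buffer API.";
    const uint_fast64_t text_size = strlen(text) + 1;

    /* Size the compression output with the safe-size helper. */
    uint_fast64_t compress_safe = density_compress_safe_size(text_size);
    uint8_t *compressed = malloc(compress_safe);
    if (!compressed) return 1;
    density_processing_result c = density_compress((const uint8_t *)text, text_size,
                                                   compressed, compress_safe,
                                                   DENSITY_ALGORITHM_CHAMELEON);
    if (c.state != DENSITY_STATE_OK) return 1;

    /* Size the decompression output with density_decompress_safe_size():
       this is the buffer that may need to be larger than the original data. */
    uint_fast64_t decompress_safe = density_decompress_safe_size(text_size);
    uint8_t *decompressed = malloc(decompress_safe);
    if (!decompressed) return 1;
    density_processing_result d = density_decompress(compressed, c.bytesWritten,
                                                     decompressed, decompress_safe);
    if (d.state != DENSITY_STATE_OK) return 1;

    printf("original %llu, compressed %llu, decompressed %llu bytes\n",
           (unsigned long long)text_size,
           (unsigned long long)c.bytesWritten,
           (unsigned long long)d.bytesWritten);

    /* Context cleanup is omitted here for brevity. */
    free(compressed);
    free(decompressed);
    return 0;
}
```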
Yes, that would definitely work of course. Initially the library was developed with streams in mind, so that encoding and decoding could take place simultaneously over a network, for example,...
This might be an issue then, because the timings cannot be reliable if anything else runs on the testing machine, as these functions do not measure CPU time but absolute...
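To make the distinction concrete, here is a generic POSIX example, unrelated to DENSITY itself: CLOCK_MONOTONIC (wall-clock, "absolute" time) keeps advancing while the process sleeps or is descheduled in favour of other programs, whereas CLOCK_PROCESS_CPUTIME_ID only counts time this process actually spends on the CPU.

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Seconds elapsed between two timespec values. */
static double elapsed(struct timespec start, struct timespec end) {
    return (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void) {
    struct timespec wall_start, wall_end, cpu_start, cpu_end;

    clock_gettime(CLOCK_MONOTONIC, &wall_start);          /* wall-clock (absolute) time */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu_start);  /* CPU time of this process only */

    /* Simulate a workload that also spends time off the CPU: wall-clock time keeps
       running during the sleep, CPU time does not. Other processes competing for
       the machine have a similar effect on wall-clock measurements. */
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 100000000UL; i++) sink += i;
    sleep(1);

    clock_gettime(CLOCK_MONOTONIC, &wall_end);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu_end);

    printf("wall-clock: %.3f s, CPU: %.3f s\n",
           elapsed(wall_start, wall_end), elapsed(cpu_start, cpu_end));
    return 0;
}
```

On a quiet machine the two figures differ by roughly the sleep duration; on a loaded machine the wall-clock figure inflates further, which is why benchmark timings based on absolute time need an otherwise idle system.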