Guillaume
> I created some benchmarks today for Chameleon encoding. C is still faster, but Rust isn't using any of the optimizations that C is utilizing. (Rust is still fast though)...
> Additionally, I've been thinking about a way to parallelize the algorithm (to use multiple CPU cores), but the only way I've found will work with Chameleon in its current...
Very nice! I'm curious to see that as well! Introducing SIMD in the density algorithms was attempted a while ago (see http://cbloomrants.blogspot.com/2015/03/03-25-15-my-chameleon.html), but it did not result in...
Hello! For compression and decompression to work properly, the dictionary used *must* be in the same state when starting a compress/decompress session. On some platforms, if you do not...
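To picture the point, here is a minimal sketch (the `Dictionary` type, the size and the reset calls are just assumptions, not the actual DENSITY API): both sides explicitly put the dictionary into the same known state before a session, otherwise the encoder's learned entries and the decoder's replayed entries diverge from the first word onward.

```rust
// Minimal sketch, not the DENSITY API: the dictionary is explicitly zeroed
// before each compress/decompress session so both sides start identical.
// On platforms where fresh memory is not guaranteed to be zeroed, skipping
// this step would leave the two dictionaries out of sync from the start.

struct Dictionary {
    entries: Vec<u32>,
}

impl Dictionary {
    fn new(size: usize) -> Self {
        Dictionary { entries: vec![0; size] } // explicit, portable zero-fill
    }
    fn reset(&mut self) {
        self.entries.iter_mut().for_each(|e| *e = 0);
    }
}

fn main() {
    let mut encode_dict = Dictionary::new(1 << 16);
    let mut decode_dict = Dictionary::new(1 << 16);
    // ... run a compress session with encode_dict ...
    // ... run the matching decompress session with decode_dict ...
    encode_dict.reset(); // both sides reset again before reusing the dictionaries
    decode_dict.reset();
}
```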
The main problem with a small dictionary size is that machine learning of dataset structures (unless they are very simple) will be less efficient, resulting in a decreased compression ratio....
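To make that trade-off concrete, here is a minimal Chameleon-style sketch (the hash constant, group layout and `HASH_BITS` value are assumptions, not the exact DENSITY format): a hash-indexed dictionary learns 4-byte words as they stream by, and shrinking `HASH_BITS` means more collisions, fewer predicted words, and therefore a lower compression ratio.

```rust
// Minimal Chameleon-style encoder sketch (not the exact DENSITY format).
// HASH_BITS controls the dictionary size: fewer bits -> fewer entries ->
// more collisions -> fewer predicted words -> worse compression ratio.

const HASH_BITS: u32 = 16;               // 2^16 entries, assumed here
const HASH_MULTIPLIER: u32 = 0x9E37_79B1; // assumed mixing constant

fn hash(word: u32) -> usize {
    (word.wrapping_mul(HASH_MULTIPLIER) >> (32 - HASH_BITS)) as usize
}

/// Encodes `input` (length assumed to be a multiple of 4 for brevity).
/// Per group of 8 words: 1 flag byte, then for each word either a 2-byte
/// dictionary index (flag bit set) or the 4-byte literal.
fn chameleon_encode(input: &[u8]) -> Vec<u8> {
    let mut dict = vec![0u32; 1 << HASH_BITS];
    let mut out = Vec::with_capacity(input.len());
    for group in input.chunks(4 * 8) {
        let mut flags = 0u8;
        let mut body = Vec::with_capacity(group.len());
        for (i, chunk) in group.chunks_exact(4).enumerate() {
            let word = u32::from_le_bytes([chunk[0], chunk[1], chunk[2], chunk[3]]);
            let h = hash(word);
            if dict[h] == word {
                flags |= 1 << i;                              // predicted: emit the index
                body.extend_from_slice(&(h as u16).to_le_bytes());
            } else {
                dict[h] = word;                               // learn it for next time
                body.extend_from_slice(&word.to_le_bytes());
            }
        }
        out.push(flags);
        out.extend_from_slice(&body);
    }
    out
}

fn main() {
    let data = b"ABCDABCDABCDABCDABCDABCDABCDABCD"; // 32 bytes, highly repetitive
    let compressed = chameleon_encode(data);
    println!("{} -> {} bytes", data.len(), compressed.len());
}
```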
Hey Piotr ^^ Thanks, yes this looks like a promising idea at first glance. I'll have to investigate and benchmark it, but it would clearly greatly reduce the amount...
@191919, for testing purposes, would you have a histogram or something equivalent of the data quantity against size you're using for testing? i.e. for example 123 datasets...
Ok thanks, I see!
Yes, I'm definitely going to check it; the only drawback I can see is that some algorithms like Chameleon are so fast that even a bit mask and test might...
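Purely to illustrate where such a check would land (the mask value and early-out below are hypothetical, the actual test being discussed isn't shown here): in a loop that does this little work per 4-byte word, one extra AND, compare and branch is already a measurable fraction of the total cost.

```rust
// Hypothetical sketch only: the specific mask-and-test being discussed is not
// shown in this excerpt. It just illustrates that an extra gate sits on the
// per-word hot path, where Chameleon otherwise does very little work.

fn hot_loop(words: &[u32], dict: &mut [u32; 1 << 16]) -> u64 {
    let mut predicted = 0u64;
    for &word in words {
        // hypothetical extra gate: mask a few bits and test before the lookup
        if word & 0x00FF == 0 {
            continue; // assumed early-out path
        }
        let h = (word.wrapping_mul(0x9E37_79B1u32) >> 16) as usize;
        if dict[h] == word {
            predicted += 1;
        } else {
            dict[h] = word;
        }
    }
    predicted
}

fn main() {
    let mut dict = Box::new([0u32; 1 << 16]);
    let sample: Vec<u32> = (0..1_000u32).map(|i| (i % 32) * 0x0101).collect();
    let hits = hot_loop(&sample, &mut dict);
    println!("predicted {} of {} words", hits, sample.len());
}
```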
> Lots of experimentation ahead :) On my side, I'm busy with a Rust rewrite (and improvement) of demixer (https://encode.ru/threads/1671-Demixer-new-tree-based-bitwise-CM-codec-is-in-development). Currently I'm working on tree handling and it's...