93 comments by Kang Seonghoon

As per #29, these would be implemented as the following additional output formats:

* `-F8gz`
* `-F8zip`
* `-F8zpng`

I originally used `-F6gz` and so on, but thinking about that...
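For context on the `-F8zpng` idea, a PNG-based format would presumably lean on the browser's built-in DEFLATE decoder by storing the packed JS as pixel data. The sketch below is only an illustration of that kind of bootstrap, not the actual `-F8zpng` output; the file name and the one-byte-per-pixel red-channel layout are assumptions.

```js
// Hypothetical PNG bootstrap: the packed JS is stored as pixel data inside a
// PNG (assumed here to be one byte per pixel in the red channel), so the
// browser's built-in DEFLATE decoder does the decompression while decoding
// the image.
const img = new Image();
img.onload = () => {
  const c = document.createElement('canvas');
  c.width = img.width;
  c.height = img.height;
  const ctx = c.getContext('2d');
  ctx.drawImage(img, 0, 0);
  const data = ctx.getImageData(0, 0, img.width, img.height).data;
  let js = '';
  for (let i = 0; i < data.length; i += 4) js += String.fromCharCode(data[i]);
  (0, eval)(js);          // trailing padding pixels would need trimming in practice
};
img.src = 'payload.png';  // hypothetical file name
```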

I've also briefly considered `-F8zwebp` which uses WebP Lossless instead of PNG, but it wasn't significantly better than (optimally recompressed) PNG because both use bytewise LZ77 + Huffman coding as...

As of 2.1.0 it still remains true that g(x) and h(x) perform much worse than f(x) even after `-O2`.

So far the best I could come up with was `(ι.charCodeAt(ρ++)-XX||YY)`, which will cost ~9 additional bytes. While this overhead is negligible, I kinda want a cleaner solution...
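The trick in this expression is that `||` substitutes a fallback whenever the subtraction yields 0, i.e. when the character's code is exactly XX. A toy illustration with made-up values (96 and 13 are placeholders, not the actual XX/YY):

```js
// Toy illustration of `(ι.charCodeAt(ρ++)-XX||YY)` with assumed XX=96, YY=13:
// the subtraction result is used as-is unless it is 0 (char code exactly 96),
// in which case `||` substitutes the fallback value 13.
const s = 'a`b';   // char codes 97, 96, 98
let p = 0;
const next = () => s.charCodeAt(p++) - 96 || 13;
console.log(next()); // 1   (97 - 96)
console.log(next()); // 13  (96 - 96 === 0, so the fallback applies)
console.log(next()); // 2   (98 - 96)
```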

# Combine predictions and counts if possible

For many inputs the optimal precision is around 12, which means that 3–4 bits are wasted per context. And it seems that the...
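One way those unused bits could carry the count is sketched below; the 12-bit prediction plus 4-bit count split and the count-based update rule are assumptions for illustration, not the actual Roadroller layout.

```js
// Hypothetical packing: a 12-bit prediction and a small adaptation count share
// one integer slot, so the bits above the prediction are no longer wasted.
const PRECISION = 12;
const MASK = (1 << PRECISION) - 1;                 // 0xFFF

const pack = (prob, count) => (count << PRECISION) | prob;
const prob = (v) => v & MASK;
const count = (v) => v >>> PRECISION;

// Illustrative count-based update: nudge the prediction toward `bit`, with the
// step size shrinking as the count grows.
const update = (v, bit) => {
  const c = Math.min(count(v) + 1, 15);            // 4 count bits fit a 16-bit slot
  const p = prob(v) + (((bit << PRECISION) - prob(v)) / (c + 1) | 0);
  return pack(p, c);
};

let v = pack(1 << (PRECISION - 1), 0);             // start at p = 0.5, count = 0
v = update(v, 1);
console.log(prob(v), count(v));                    // 3072 1
```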

# Use XOR instead of addition

The current decoder contains the following fragment:

```js
(y=y*997+(o[l-C]|0)|0)
```

This can be replaced with `(y=y*997^o[l-C])` if we use XOR instead of addition in...
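The byte savings come from the fact that `^` already coerces both operands to int32, so the explicit `|0` coercions in the addition-based hash become redundant. A quick check (context offset omitted for brevity):

```js
// Both hash updates stay within int32: `^` applies ToInt32 to its operands,
// so the trailing `|0` and the `(…|0)` around the array read can be dropped.
const o = new Uint8Array([200, 17, 90]);
let y1 = 123456789, y2 = 123456789;
for (let l = 0; l < o.length; l++) {
  y1 = (y1 * 997 + (o[l] | 0)) | 0;  // current form: addition plus forced coercion
  y2 = y2 * 997 ^ o[l];              // XOR form: the coercion is implicit
}
console.log((y1 | 0) === y1, (y2 | 0) === y2); // true true (both remain int32)
```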

# Pre-XOR transform

There are only two places where the output is directly read:

* The context hash `(y=y*997+(o[l-C]|0)|0)`
* The output update `o[l++]=a-=K` where K = 2^inBits

If we...
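It may help to note why an XOR-flavored transform is plausible at the output update in the first place: as long as `a` carries exactly the marker bit K = 2^inBits above the payload (K ≤ a < 2K), subtracting K and XORing with K give the same result, so the two operations are interchangeable there. A minimal check (the concrete inBits value is arbitrary):

```js
// For K = 2^inBits and K <= a < 2K, the marker bit is the only bit above the
// payload, so clearing it by subtraction or by XOR gives identical results.
const inBits = 8, K = 1 << inBits;
let same = true;
for (let a = K; a < 2 * K; a++) same = same && (a - K) === (a ^ K);
console.log(same); // true
```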

# Use WOFF2 as a Brotli container

The current Roadroller output is competitive with Brotli even with the decoder included, but Brotli still works better for some inputs (although only...

# Other models

So far I have implemented (probably incorrectly) the following models as experiments:

* [Indirect Context Model](http://mattmahoney.net/dc/dce.html#Section_413) where the bit history is (0..10 zeroes, 0..10 ones, last bit); see the sketch after this list
* ...
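A rough sketch of the bit-history state described above: each context keeps capped zero/one counters plus the last bit, and that history state, rather than the context itself, indexes a shared table of adaptive probabilities. The `Map`-based storage and the adaptation rate are assumptions for illustration.

```js
// Indirect context model sketch: a tiny per-context bit history
// (zeroes seen, ones seen, both capped at 10, plus the last bit) selects an
// entry in a shared probability table.
const stateOf = (h) => (Math.min(h.zeroes, 10) * 11 + Math.min(h.ones, 10)) * 2 + h.last;

const NUM_STATES = 11 * 11 * 2;                       // 242 possible histories
const indirect = new Float32Array(NUM_STATES).fill(0.5);
const histories = new Map();                          // context hash -> bit history

function predict(ctx) {
  const h = histories.get(ctx) || { zeroes: 0, ones: 0, last: 0 };
  return indirect[stateOf(h)];
}

function update(ctx, bit, rate = 0.02) {
  const h = histories.get(ctx) || { zeroes: 0, ones: 0, last: 0 };
  const s = stateOf(h);
  indirect[s] += (bit - indirect[s]) * rate;          // adapt the shared probability
  if (bit) h.ones++; else h.zeroes++;                 // advance this context's history
  h.last = bit;
  histories.set(ctx, h);
}

// Usage: p = predict(ctx); code the bit using p; then update(ctx, bit).
```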

# Alternative hash table representation

The current Roadroller model uses a size-limited hash table like most other context model implementations, but this is not strictly necessary. Anything that can map...
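To illustrate the contrast, here is what a growable mapping from context hash to model state could look like next to the usual fixed-size table; the plain `Map` and the packed-state layout are assumptions, not the existing Roadroller representation.

```js
// Size-limited hash table: a fixed number of slots, so distinct contexts can
// collide and overwrite each other's state.
const SIZE_LOG = 16;
const slots = new Int32Array(1 << SIZE_LOG);           // one packed state per slot
const slotFor = (hash) => hash & ((1 << SIZE_LOG) - 1);

// Alternative: anything that maps a context hash to its state works, e.g. a Map
// that grows with the number of distinct contexts and never collides.
const states = new Map();
const stateFor = (hash) => {
  let s = states.get(hash);
  if (s === undefined) states.set(hash, s = 1 << 11);  // fresh state: p = 0.5 at 12-bit precision
  return s;
};

console.log(slotFor(0xdeadbeef | 0), stateFor(0xdeadbeef | 0));
```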