node-lz4
Ten times slower than gzip?
I performed a few benchmarks to see how much faster lz4 would be compared to gzip when decompressing an incoming HTTP payload. I'm not sure if I'm doing something wrong, but it seems that this library is a lot slower than just using plain gzip.
I had three test JSON documents of different sizes, which I compressed with both LZ4 and gzip:
| Uncompressed | LZ4 | gzip |
|---|---|---|
| 7,369 bytes | 3,975 bytes | 2,772 bytes |
| 73,723 bytes | 33,028 bytes | 21,790 bytes |
| 716,995 bytes | 311,365 bytes | 202,697 bytes |
The LZ4 version was compressed using default options:
uncompressed.pipe(lz4.createEncoderStream()).pipe(compressed)
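A self-contained version of that pipeline might look like the sketch below, assuming the input and output are files on disk (the file names are just placeholders for the test documents):

'use strict'

const fs = require('fs')
const lz4 = require('lz4')

// Compress body.json into the LZ4 frame format with the default encoder options
const uncompressed = fs.createReadStream('body.json')
const compressed = fs.createWriteStream('body.json.lz4')

uncompressed.pipe(lz4.createEncoderStream()).pipe(compressed)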
The gzip version was compressed from my macOS command line:
gzip uncompressed.json
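(A roughly equivalent pipeline in Node, using the built-in zlib module at its default compression level, would look like this; file names are placeholders:)

'use strict'

const fs = require('fs')
const zlib = require('zlib')

// Roughly equivalent to running `gzip uncompressed.json` with default settings
fs.createReadStream('uncompressed.json')
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream('uncompressed.json.gz'))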
I used autocannon to hammer a test HTTP server with the compressed documents. The server decompresses each payload and then simply discards it.
Here's an example of how autocannon was configured:
autocannon -i body.json.lz4 -H 'Content-Encoding: lz4' localhost:3000
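The gzip runs presumably used the same shape of command, swapping only the input file and the Content-Encoding header (the .gz file name here is an assumption):

autocannon -i body.json.gz -H 'Content-Encoding: gzip' localhost:3000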
And here's my test server running on localhost:
'use strict'

const http = require('http')
const zlib = require('zlib')
const lz4 = require('lz4')

const server = http.createServer(function (req, res) {
  const enc = req.headers['content-encoding'] || ''
  let decompressed

  // Pick a decompression stream based on the Content-Encoding header
  if (/\bgzip\b/.test(enc)) {
    decompressed = req.pipe(zlib.createGunzip())
  } else if (/\blz4\b/.test(enc)) {
    decompressed = req.pipe(lz4.createDecoderStream())
  } else {
    decompressed = req
  }

  // Collect the decompressed payload, then discard it and end the response
  const buffers = []
  decompressed.on('data', buffers.push.bind(buffers))
  decompressed.on('end', function () {
    const data = Buffer.concat(buffers) // intentionally unused
    res.end()
  })
})

server.listen(3000, function () {
  console.log('Server listening on http://localhost:3000')
})
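Before each benchmark run, a single request can be sent by hand to make sure the server accepts the compressed payload, e.g. with curl against the same test file:

curl -H 'Content-Encoding: lz4' --data-binary @body.json.lz4 http://localhost:3000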
Test 1 - Decompressing a 7,369 byte JSON document
LZ4 (3,975 bytes):
| Stat | Avg | Stdev | Max |
|---|---|---|---|
| Latency (ms) | 23.29 | 9.31 | 61.38 |
| Req/Sec | 419.7 | 13.36 | 434 |
| Bytes/Sec | 41.4 kB | 1.31 kB | 43 kB |

4k requests in 10s, 416 kB read
Gzip (2,772 bytes):
| Stat | Avg | Stdev | Max |
|---|---|---|---|
| Latency (ms) | 1.07 | 0.67 | 13.48 |
| Req/Sec | 7064.4 | 704.93 | 7733 |
| Bytes/Sec | 699 kB | 67.8 kB | 766 kB |

71k requests in 10s, 6.99 MB read
Test 2 - Decompressing a 73,723 byte JSON document
LZ4 (33,028 bytes):
| Stat | Avg | Stdev | Max |
|---|---|---|---|
| Latency (ms) | 23.28 | 8.94 | 55.9 |
| Req/Sec | 419.8 | 11.45 | 435 |
| Bytes/Sec | 41.8 kB | 1.1 kB | 43.1 kB |

4k requests in 10s, 416 kB read
Gzip (21,790 bytes):
| Stat | Avg | Stdev | Max |
|---|---|---|---|
| Latency (ms) | 2.7 | 1.61 | 21.23 |
| Req/Sec | 3131 | 105.16 | 3342 |
| Bytes/Sec | 313 kB | 13.1 kB | 331 kB |

31k requests in 10s, 3.1 MB read
Test 3 - Decompressing a 716,995 byte JSON document
On a large document like this the difference between gzip and lz4 is much smaller, but gzip still wins:
LZ4 (311,365 bytes):
| Stat | Avg | Stdev | Max |
|---|---|---|---|
| Latency (ms) | 41.56 | 13.21 | 102 |
| Req/Sec | 237.6 | 6.95 | 250 |
| Bytes/Sec | 23.7 kB | 819 B | 24.8 kB |

2k requests in 10s, 235 kB read
Gzip (202,697 bytes):
| Stat | Avg | Stdev | Max |
|---|---|---|---|
| Latency (ms) | 26.11 | 6.51 | 137.09 |
| Req/Sec | 375.4 | 7.61 | 381 |
| Bytes/Sec | 37.5 kB | 819 B | 37.7 kB |

4k requests in 10s, 372 kB read
I'm seeing a big difference between Node and the browser for a 17 MB text: 16 s in the browser vs 6.5 s in Node.js 10.
The browser implementation uses pure JS to decode/encode, whereas the Node one binds to the C implementation.
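For anyone who wants to isolate that difference, a minimal sketch (assuming a compressed file like the ones from the benchmarks above) is to time the documented one-shot LZ4.decode call in Node, and run the same decode against the same bytes with the browser build, where only the pure-JS codec is available:

'use strict'

const fs = require('fs')
const lz4 = require('lz4')

// Time a single one-shot decode; in Node this should go through the C binding,
// while the browser build falls back to the pure-JS codec.
const compressed = fs.readFileSync('body.json.lz4')

console.time('lz4 decode')
const decoded = lz4.decode(compressed)
console.timeEnd('lz4 decode')

console.log('decoded %d bytes', decoded.length)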