
gzip compression of http responses

Open jakeg opened this issue 2 years ago • 8 comments

What is the problem this feature would solve?

Currently, all HTTP responses are uncompressed.

What is the feature you are proposing to solve the problem?

It would be good to enable gzip compression for HTTP responses.

@Jarred-Sumner says "this is a flag that the HTTP server Bun uses internally supports, but we haven't exposed it yet"

What alternatives have you considered?

No response

jakeg avatar Apr 24 '23 14:04 jakeg

Any progress on this? It's such an easy win for reducing bytes over the wire.

jakeg avatar Nov 04 '23 23:11 jakeg

+1

icflorescu avatar Jan 29 '24 21:01 icflorescu

+1

andirkh avatar Jan 30 '24 07:01 andirkh

I think I have a workaround for anyone who needs gzip compression of HTTP responses.

Given: you want to return HTML in the response that can be consumed by a browser.

  • open the HTML file and get its contents as a string
  • compress it with Bun.gzipSync(htmlString)
  • add the appropriate header to the response, e.g. 'Content-Encoding': 'gzip'

So here's my simple code:

const htmlString = await Bun.file('./index.html').text();

Bun.serve({
  fetch(req) {
    // Compress the body ahead of time with Bun's built-in gzip.
    const compressed = Bun.gzipSync(htmlString);
    return new Response(compressed, {
      headers: {
        'Content-Type': 'text/html',
        // Tell the client the body is gzip-encoded.
        'Content-Encoding': 'gzip'
      }
    })
  }
})
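
One caveat: this sends gzip unconditionally, but a client that doesn't list gzip in its Accept-Encoding request header may not be able to decode the body. A guard that checks req.headers.get('accept-encoding') and falls back to the uncompressed string keeps it safe.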

andirkh avatar Feb 04 '24 08:02 andirkh

Any official support for this on the way?

jakeg avatar Sep 14 '24 05:09 jakeg

Kind of amazed this still isn't a thing. It's a little embarrassing that I'm running websites without gzip compression as a result.

jakeg avatar Mar 27 '25 04:03 jakeg

IMO it's generally worth throwing a node / bun site behind nginx for things like this. (And so nginx can serve all your static assets efficiently.)

Still, it'd be lovely if bun supported this out of the box.

Also, if you're considering gzip, zstd is available pretty widely now. It's smaller and faster than gzip in all cases iirc. (Also brotli should be considered for static assets!)

josephg avatar Mar 31 '25 06:03 josephg

Not saying it couldn't be a built-in feature, but it's fairly trivial to add whatever compression logic you'd want (content negotiation with accepted formats, only compressing specific size ranges, etc.) to your own server code, e.g. https://nodejs.org/api/zlib.html#compressing-http-requests-and-responses
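
For instance, a minimal sketch of that kind of userland logic, with content negotiation and a size floor (renderPage is a hypothetical stand-in, and the 1024-byte cutoff is arbitrary):

import { gzipSync } from 'node:zlib';

// Hypothetical stand-in for a dynamic page renderer.
const renderPage = (req) => '<html><body>' + 'x'.repeat(2048) + '</body></html>';

Bun.serve({
  fetch(req) {
    const body = renderPage(req);
    const accepts = req.headers.get('accept-encoding') ?? '';

    // Only compress when the client accepts gzip and the body is
    // big enough for compression to be worth the CPU cost.
    if (accepts.includes('gzip') && body.length >= 1024) {
      return new Response(gzipSync(body), {
        headers: {
          'Content-Type': 'text/html',
          'Content-Encoding': 'gzip'
        }
      })
    }
    return new Response(body, { headers: { 'Content-Type': 'text/html' } })
  }
})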

infrahead avatar Jun 12 '25 15:06 infrahead

Just ran into this. Spent some time this week switching our production app from Deno to Bun. Once it was working, it turned out that Bun neither compresses responses by default (understandable for static files) nor provides a convenient opt-in API for this.

In our production app, the JS server dynamically generates fairly large HTML pages and serves a variety of static files. Just checked: main page HTML is 300 KB uncompressed, 39 KB compressed; CSS is 524 KB uncompressed, 123 KB compressed. This was auto-compressed by Deno, with Brotli.

Switching to non-compressed responses is out of the question, as it would significantly increase page load latency. We'd rather use slightly more CPU on the server. Per-request compression adds CPU latency, but for many users it reduces network latency (even accounting for client-side decompression) far more.

Implementing compression logic in userland is possible but fiddly.

  • Carefully check HTTP headers and MIME types to choose the compression algorithm.
    • Many MIME types are pre-compressed, such as fonts, most image formats, etc. Further compression (and client-side decompression) is often not beneficial.
    • Zstd can significantly outperform the other algorithms, and should be preferred when available both to the client and to the server.
    • Brotli can be significantly better than Gzip and should be preferred when the client supports it. But you have to lower its quality from the default, see below.
  • Choose between compressing on the main thread vs secondary threads. May have to maintain a Worker pool.
  • Choose the right compression parameters. This is crucial, otherwise you'd be adding latency instead of reducing it.
    • Brotli compression in node:zlib defaults to max quality, which is very slow (140ms for 271 KB of HTML on my machine). Lowering quality to 5 approximately matches Deno's default, dramatically improving performance (down to 5ms in this example) with far less effect on the resulting size; see the sketch after this list.
  • For static files, you'd need to manage a disk or RAM cache (see below).
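
To illustrate the brotli tuning mentioned in the list, a minimal sketch using node:zlib (quality 5 as suggested above; the input file name is a placeholder):

import { brotliCompressSync, constants } from 'node:zlib';

const html = await Bun.file('./page.html').text(); // placeholder input

// node:zlib defaults BROTLI_PARAM_QUALITY to 11 (maximum), which is
// far too slow for on-the-fly responses; 5 is a much better tradeoff.
const compressed = brotliCompressSync(html, {
  params: {
    [constants.BROTLI_PARAM_QUALITY]: 5,
    // Hinting the input size can help the encoder.
    [constants.BROTLI_PARAM_SIZE_HINT]: Buffer.byteLength(html)
  }
});

console.log(html.length, '->', compressed.length, 'bytes');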

So what should be done in Bun?

For dynamically generated responses, auto-compression seems like a no-brainer. Bun could provide options to turn this off or tune the compression settings.

For static files, there are tradeoffs. Auto-compression requires either per-response compression (uses IO, CPU, no sendfile), or caching (uses more storage or RAM, may involve cleanup / eviction logic).

I would settle for an explicit API with a configurable cache for static files. User code would specify where to store the artifacts: either a disk path, or RAM. Bun would automatically choose the right algorithm, file location, etc., create compressed files as needed, and serve them as efficiently as it can.
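
Purely as a strawman (none of this exists in Bun today), the option surface for such an API could look something like this:

// Hypothetical API, not implemented in Bun; a strawman for discussion.
Bun.serve({
  compression: {
    encodings: ['zstd', 'br', 'gzip'],     // preference order
    cache: { dir: './.cache/compressed' }, // or { memory: true } for RAM
  },
  fetch(req) {
    return new Response('...')
  }
})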

Our app currently uses RAM-caching in production, but only as an intermediary, imperfect solution: it caches source data instead of compressed data, relying on Deno to constantly re-encode everything it serves. The "final" solution would be the same for all engines and environments: caching the actual compressed artifacts. (Well, the actual "perfect" solution would involve caching a chunk of bytes representing a complete HTTP response, and zero-copy writing that to a TCP socket, but the standard JS web APIs don't allow that, so we'll settle for the nearest alternative.)

A lot of apps ship with a fairly small amount of static files, and can easily afford to RAM-cache everything to reduce IO. This should be user-configurable, and tends to be a non-issue for a fixed set of static files shipping with a server container.
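
To make that concrete, a minimal sketch of a RAM cache of compressed artifacts, keyed by path and encoding (gzip-only for brevity; all names here are illustrative):

import { gzipSync } from 'node:zlib';

// Illustrative in-memory cache: "path:encoding" -> compressed bytes.
const cache = new Map();

async function serveStatic(req, path) {
  const accepts = req.headers.get('accept-encoding') ?? '';
  const file = Bun.file(path);
  if (!accepts.includes('gzip')) return new Response(file);

  const key = path + ':gzip';
  let body = cache.get(key);
  if (!body) {
    // Compress once; every later request is served from memory.
    body = gzipSync(await file.arrayBuffer());
    cache.set(key, body);
  }
  return new Response(body, {
    headers: {
      'Content-Type': file.type,
      'Content-Encoding': 'gzip'
    }
  })
}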


If there is a library which correctly implements all the logic outlined above, someone please point it out. Otherwise I'd have to write one (or just stick with Deno).

mitranim avatar Jul 25 '25 12:07 mitranim

Solved this for my projects by implementing an HTTP compressor utility with support for RAM-caching, similar to the plan outlined above. It selects the appropriate algorithm, performs compression, and if caching is enabled, caches compressed artifacts. On repeated requests for static files, those are served from memory without repeating the work. Everything is opt-in. You enable caching in production, but not in development. Caching is used for files, but not for dynamic responses. The whole thing works in both Bun and Deno.

It's part of my "JS standard library" which is used at my company and in some other projects. If anyone's curious, here's the code: compressor, file caching, compressed artifact caching (edit: updated links).

mitranim avatar Jul 28 '25 17:07 mitranim

+1

CodesbyRobot avatar Sep 24 '25 23:09 CodesbyRobot

> IMO it's generally worth throwing a node / bun site behind nginx for things like this. (And so nginx can serve all your static assets efficiently.)
>
> Still, it'd be lovely if bun supported this out of the box.
>
> Also, if you're considering gzip, zstd is available pretty widely now. It's smaller and faster than gzip in all cases iirc. (Also brotli should be considered for static assets!)

I totally agree. For production-grade apps, nginx is the way to go for compression + static file serving.

But I was also looking for Bun to support this out of the box, which would be totally fine for MVPs.

itsjavi avatar Oct 14 '25 17:10 itsjavi