# Client-Decompression support

This PR adds client-side decompression support for WebSocket connections using zlib. It also partially resolves #55.
## Zlib

The zlib code is copied from NIO-extras, though I've also removed some parts we don't need and added some that we do.

The reason for the copy-paste is that I asked the NIO team whether `NIOHTTPDecompression.Decompressor` in NIO-extras could be made public so we could use it in this package, and Cory answered:

> We very deliberately don't want those to be API. Copy-pasting them to your repo is fine :slightly_smiling_face:

I then decided to go all-in on copy-pasting and not pull in NIO-extras just for its small zlib C-interop target.
## Manual Tests

I've tested this PR on Discord Gateway connections using my Discord library (this branch), and it worked as if compression weren't even in play.

Discord uses deflate, but I've also kept the gzip option (like NIO-extras).
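For context, the only difference between the two formats is the stream wrapper zlib applies around the same deflate data, selected via the `windowBits` value. A minimal sketch using Python's `zlib` module (which binds the same C zlib this PR wraps); the payload is made up:

```python
import zlib

payload = b'{"op": 1, "d": 251}'  # hypothetical message

# deflate with a zlib wrapper: windowBits = 15 (the default)
deflated = zlib.compress(payload)

# gzip wrapper: windowBits = 15 + 16 = 31
gz = zlib.compressobj(wbits=31)
gzipped = gz.compress(payload) + gz.flush()

# The decompressor must be configured for the wrapper it expects.
assert zlib.decompress(deflated, wbits=15) == payload
assert zlib.decompress(gzipped, wbits=31) == payload

# windowBits = 15 + 32 auto-detects either wrapper.
assert zlib.decompress(deflated, wbits=47) == payload
assert zlib.decompress(gzipped, wbits=47) == payload
```

This is why keeping the gzip option is cheap: the decompressor code path is identical apart from that one parameter.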
## FrameSequence

I also simplified the WebSocket's `FrameSequence`.

Previously it always converted a text frame's buffer into a `String`, which is wasteful if you need the raw data anyway, for example to decode JSON.
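The same point, illustrated in Python terms (the Swift specifics differ, but the shape of the waste is the same): a JSON decoder can consume the raw bytes directly, so materializing an intermediate string first is a pure extra allocation. The payload below is made up:

```python
import json

buffer = b'{"op": 0, "d": {"heartbeat_interval": 41250}}'  # hypothetical frame payload

# Wasteful: decode the buffer to a string first, then parse.
via_string = json.loads(buffer.decode("utf-8"))

# Direct: hand the decoder the bytes as-is, skipping the intermediate copy.
direct = json.loads(buffer)

assert via_string == direct
```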
## TODO

~~This PR is still not final.~~ I need to:

- [x] Write tests.
- [x] Decide how to allocate the decompression buffer's initial capacity.
## Codecov Report

Merging #123 (32b517b) into main (2d9d218) will decrease coverage by 4.17%. The diff coverage is 84.28%.

:exclamation: Current head 32b517b differs from pull request most recent head 8e3a62a. Consider uploading reports for the commit 8e3a62a to get more accurate results.
Additional details and impacted files:

    @@            Coverage Diff             @@
    ##             main     #123      +/-   ##
    ==========================================
    - Coverage   73.37%   69.20%   -4.17%
    ==========================================
      Files           5        8       +3
      Lines         507      721     +214
    ==========================================
    + Hits          372      499     +127
    - Misses        135      222      +87

| Impacted Files | Coverage Δ | |
|---|---|---|
| Sources/WebSocketKit/WebSocketClient.swift | 75.00% <66.66%> (-1.80%) | :arrow_down: |
| Sources/WebSocketKit/WebSocketHandler.swift | 50.00% <75.00%> (-5.32%) | :arrow_down: |
| ...ketKit/Concurrency/Compression/Decompression.swift | 84.00% <84.00%> (ø) | |
| Sources/WebSocketKit/WebSocket.swift | 74.90% <90.90%> (+2.79%) | :arrow_up: |
| ...ocketKit/Concurrency/Compression/Compression.swift | 100.00% <100.00%> (ø) | |
| ...urces/WebSocketKit/HTTPInitialRequestHandler.swift | 68.88% <100.00%> (+0.70%) | :arrow_up: |
Codecov seems to be flagging problems for some of the tests (I haven't even touched the tests yet).

It appears this will resolve #79 too.

The Linux 5.5 test was taking too long, so I re-ran it. It turned out to be an untouched test that was running long (`AsyncWebSocketKitTests.testWebSocketEcho`). I don't see how it can be related to this PR at all. If I were to guess, Swift's concurrency didn't do something on time; I've seen these kinds of problems with async/await before.
EDIT: Gwynne kind-of confirmed that this is not related to this PR:

> The problem is quite obviously a race condition
I'll need to check on that later in the coming days or weeks 🤔

I'm not sure that blocks this PR, though. The current implementation certainly works, although it might be suboptimal.
I'm sure we could reuse a lot of what you have implemented! But without the negotiation, I don't think the server would even do compression/decompression, since the client never told it that it supports that.
RFCs are good, but you should also try not to diverge from the reality of how it actually works in practice.
I don't think this will work ....
That's not the case with Discord Gateway connections at the very least. You specify that you want zlib compression using a query parameter in your URL, then you enable the decompression in the library. Everything works nicely thereafter.
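To illustrate what that scheme looks like on the wire (Discord calls it zlib-stream): the server keeps one deflate context for the whole connection and flushes each message with `Z_SYNC_FLUSH`, so every chunk ends with the `00 00 FF FF` sync marker, and the client feeds every chunk into one shared inflater. A Python sketch with made-up payloads:

```python
import zlib

SUFFIX = b"\x00\x00\xff\xff"  # Z_SYNC_FLUSH marker that terminates each message

# Server side: a single compressor shared across the whole connection.
comp = zlib.compressobj()
wire_chunks = [
    comp.compress(msg) + comp.flush(zlib.Z_SYNC_FLUSH)
    for msg in (b'{"op": 10}', b'{"op": 11}')
]
assert all(chunk.endswith(SUFFIX) for chunk in wire_chunks)

# Client side: a single decompressor shared across the whole connection too.
# Resetting it between messages would break later messages, since they
# reference the shared sliding window.
decomp = zlib.decompressobj()
messages = [decomp.decompress(chunk) for chunk in wire_chunks]
assert messages == [b'{"op": 10}', b'{"op": 11}']
```

Note this is transport-level compression negotiated out of band (the query parameter), which is why no `Sec-WebSocket-Extensions` handshake is involved.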
> That's not the case with Discord Gateway connections at the very least. You specify that you want zlib compression using a query parameter in your URL, then you enable the decompression in the library. Everything works nicely thereafter.
Oh, interesting! Today I learned. 😊
Is this a common pattern or is it Discord-specific?
I'm not sure. I will have to check later when I have time.
Thank you @fumoboy007 for pointing me to the RFC. I'm working on this now ...
The approach this library has taken is that you tell the WebSocket library you'd prefer compression enabled; the library then tries to negotiate with the WS server, and if the negotiation succeeds, compression is enabled. Otherwise compression won't be enabled, but the connection will still be established with the WS server.
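Concretely, that negotiation happens through the `Sec-WebSocket-Extensions` header during the upgrade handshake (RFC 7692): the client sends an offer, and the server either echoes the extension back (possibly with parameters) or omits it, in which case frames stay uncompressed. A hypothetical sketch of that logic, not this PR's actual API:

```python
from typing import Optional

def build_offer() -> str:
    # What the client would put in Sec-WebSocket-Extensions on the upgrade request.
    return "permessage-deflate; client_max_window_bits"

def server_accepted(response_header: Optional[str]) -> bool:
    # The server echoes the extension back only if it agrees to compress;
    # absence of the header means: fall back to plain, uncompressed frames.
    if response_header is None:
        return False
    return response_header.split(";")[0].strip() == "permessage-deflate"

assert server_accepted("permessage-deflate; server_no_context_takeover")
assert not server_accepted(None)          # graceful fallback, connection still works
assert not server_accepted("x-webkit-deflate-frame")
```

The key property is the graceful fallback: a failed negotiation downgrades to uncompressed frames rather than failing the connection.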
I dug a little more into this, and it appears Discord is actually not using a standard implementation of WebSocket decompression, so I'll need to make sure the PR supports at least some of the standards before trying to merge it.

Specifically, I played a little with "permessage-deflate no-context-takeover" mode, and it seems the current implementation does not work well with it. The PR is generally fine structure-wise, but I'll probably need to make sure the correct values are passed to the zlib decompressor so it can handle permessage-deflate's no-context-takeover mode.
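For the record, `no_context_takeover` means the deflate sliding window is reset between messages, so every message must decompress on its own; in zlib terms that maps to raw deflate (negative `windowBits`) plus a fresh context (or a reset) per message, with the trailing `00 00 FF FF` stripped on the wire and re-appended before inflating. A Python sketch of the per-message framing, under those assumptions:

```python
import zlib

TAIL = b"\x00\x00\xff\xff"  # stripped from / re-appended to each message (RFC 7692)

def compress_message(data: bytes) -> bytes:
    # no_context_takeover: a fresh raw-deflate context for every message.
    comp = zlib.compressobj(wbits=-15)
    body = comp.compress(data) + comp.flush(zlib.Z_SYNC_FLUSH)
    return body[:-4]  # permessage-deflate drops the trailing 00 00 ff ff

def decompress_message(body: bytes) -> bytes:
    # A fresh context on the receiving side too; re-append the stripped tail.
    decomp = zlib.decompressobj(wbits=-15)
    return decomp.decompress(body + TAIL)

# Each message round-trips independently, with no shared window between them.
for msg in (b'{"op": 1}', b'{"op": 2, "d": null}'):
    assert decompress_message(compress_message(msg)) == msg
```

This is where the "correct values" matter: using a zlib-wrapped stream, or reusing the inflate context across messages, breaks in exactly this mode.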
Will this support compression for the WebSocket client as well, or just as a server?

This PR, whenever I actually finish it, will only support decompression as a WebSocket client. That's the plan, at least. We can move forward from there.

Do you have any use cases for compression as a client, though? I was looking for real-world test subjects to run the PR against, and Google wasn't too helpful.
@MahdiBM I have a very specific use case. My Vapor application is a high throughput proxy that forwards HTTP or ws RPC requests from the client to a pool of backends through WebSocket connections. Right now we have many TBs of traffic per day between the proxies and the backend pool over WebSocket. As the requests and responses are JSON and we have data around gzip over HTTP we believe that we can reduce bandwidth at least 5x (probably even 7-8x).
So yes, we would need this mainly for WebSocket clients. Very happy to test this out ahead of the official release. What would be the best way to contact you so we can coordinate it?
@koraykoska In that case, your best bet is probably to fork websocket-kit and use compress-nio to compress/decompress the data you send (e.g. this is where you send text, where you receive any data).

This PR is not ready, so there is not much to test. The current state is that it supports only decompression, and only on the client side (or that's what I recall), which misses the compress-on-the-server-side part you need.

If you're sending texts (or text frames, more specifically), there is also this issue, which could help you optimize (and is fairly easy to do).
@MahdiBM I am not in a rush with this, and implementing it myself instead of waiting for the PR to be ready seems like a waste of time, unless you believe it's going to take rather long until it gets merged. If not, I'll simply wait. But thanks for the heads-up about the separate callback.

@koraykoska I'm honestly not sure how much time this will take. Could be a week or two, or six months.