Mekya Protocol: meek meets mKCP
This article is a more technically advanced version of the announcement of the mekya protocol, intended for engineers; understanding it may require specialized knowledge and tools. The less technically rigorous but more exciting version of the article can be found here: https://github.com/v2fly/v2ray-core/discussions/3129
Protocol Design
The mekya protocol is designed as a performance improvement over the meek protocol, working around meek's performance limitations. It does so by treating the many tunnels formed by concurrent HTTP requests and responses as unreliable channels and sending mKCP traffic over them. In both HTTP requests and responses, more than one datagram packet can be grouped into a single HTTP message; the maximum size and maximum lingering time are adjustable.

For uploads, HTTP streaming upload is not used, since many HTTP forwarding services do not support it as a consequence of the Slowloris attack. This is not critical, though, because the client can initiate a new request to the server at any time. HTTP streaming download is used when available to reduce latency, but it is not strictly required: the protocol is designed to end each response frequently, so data has the opportunity to reach the client even when a middle box buffers the response, albeit with reduced performance. This ensures that mekya supports as many HTTP forwarder services as possible, while opportunistically taking advantage of those that support HTTP streaming download for better performance. During testing, mekya typically generated 200K to 700K requests per hour and consumed about as much traffic as mKCP; this behavior is tunable through configuration.
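To make the batching idea concrete, here is a minimal sketch in Go: buffered datagrams are flushed into a single HTTP request body once either a maximum size or a maximum lingering time is reached, and reliability is left entirely to mKCP. All names here (`batchAndPost`, `maxRequestSize`, `maxLinger`) are illustrative assumptions, not V2Ray's actual API or configuration keys.

```go
package mekyasketch

import (
	"bytes"
	"net/http"
	"time"
)

// batchAndPost groups datagrams into a single HTTP request body, flushing when
// either maxRequestSize bytes are buffered or maxLinger has elapsed.
// Reliability is mKCP's job: if a POST is lost, mKCP retransmits the datagrams.
func batchAndPost(datagrams <-chan []byte, endpoint string,
	maxRequestSize int, maxLinger time.Duration) {
	var buf bytes.Buffer
	timer := time.NewTimer(maxLinger)

	flush := func() {
		if buf.Len() == 0 {
			return
		}
		body := append([]byte(nil), buf.Bytes()...)
		buf.Reset()
		go func() {
			// Errors are deliberately ignored: a lost batch is recovered by
			// mKCP retransmission in a later request.
			resp, err := http.Post(endpoint, "application/octet-stream", bytes.NewReader(body))
			if err == nil {
				resp.Body.Close()
			}
		}()
	}

	for {
		select {
		case d, ok := <-datagrams:
			if !ok {
				flush()
				return
			}
			buf.Write(d)
			if buf.Len() >= maxRequestSize {
				flush()
				timer.Reset(maxLinger)
			}
		case <-timer.C:
			flush()
			timer.Reset(maxLinger)
		}
	}
}
```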
Similar protocols
By design, mekya is an HTTP/2 Reflect-able protocol, which means it can be forwarded by an HTTP forwarder service that does not understand HTTP/1.1 Upgrade or features that depend on it (like WebSocket), or HTTP/3/QUIC features (like WebTransport). Two other protocols are compared here: meek and SplitHTTP.
The following two paragraphs are directly copied from the ChatGPT translation of the mekya promotional material:
In the meek protocol, the streaming payload connection is divided into smaller data blocks and transmitted sequentially through HTTP requests to the remote server or received from the server by the client. This design mandates that requests be sent and received in order. High latency can cause substantial delays due to the time spent awaiting server responses or client requests, resulting in reduced transfer speeds. This approach is suited for the transfer of metadata where high-speed transmission is not critical.
In the SplitHTTP protocol, upload data is segmented into blocks and tagged with sequence numbers to permit out-of-order transmission, then transmitted to the server via HTTP requests. Download data is streamed to the client using the HTTP protocol. This design mitigates connection latency issues through out-of-order transmission uploading and pipelined downloading; however, it is still impacted by single-connection network conditions. Furthermore, due to its reliance on HTTP streaming, it can leverage only a subset of HTTP Reflectors (like Reverse Proxy or CDN).
(End of ChatGPT assisted part.)
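To make the latency argument about meek concrete, here is a minimal sketch of a meek-style sequential polling loop (illustrative only, not meek's actual implementation): one block goes up per request and one block comes back in the response, so each pair of blocks costs a full round trip and throughput is roughly bounded by the block size divided by the RTT.

```go
package meeksketch

import (
	"bytes"
	"io"
	"net/http"
)

// sequentialTunnel is a meek-style loop: upload one block per request, read
// one block from the response, and only then move on to the next block.
func sequentialTunnel(endpoint string, upstream <-chan []byte, downstream chan<- []byte) error {
	for block := range upstream {
		resp, err := http.Post(endpoint, "application/octet-stream", bytes.NewReader(block))
		if err != nil {
			return err
		}
		down, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return err
		}
		if len(down) > 0 {
			downstream <- down
		}
		// The next block cannot be sent until this response has fully
		// arrived, so throughput is roughly blockSize / RTT.
	}
	return nil
}
```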
All three of these protocols allow their traffic to be forwarded by a service that understands only the common features of HTTP/1, HTTP/2, and HTTP/3. In this way, the service provider can use many forwarders to reduce the likelihood of a full blockage. However, because of differences in design, their characteristics differ. Table 1 summarizes the features of the protocols as implemented in V2Ray v5.17 and XRay 1.8.23. (This version has slightly different content than the promotional article version.)
Table 1 - Characteristics of Various HTTP/2 Reflect-able Protocols
| | meek (V2Ray) | SplitHTTP | mekya (V2Ray) |
|---|---|---|---|
| Transfer Speed | Slow | Dependent on network conditions | Dependent on network conditions |
| Multi-threaded Transmission | Not supported | Upload only | Bidirectional |
| Requires Streaming Download | No dependency | Required | Recommended but not required |
| Self-healing for Stalled Connections | Not supported | Not supported | Yes |
| Self-healing for Network Interruptions | Not supported | Not supported | Supported |
| Protocol Tuning | Supported, advanced use case | Limited tunable parameters | Supported, advanced use case |
| Computation Resource Consumption | Low | Low | High |
How to try it
Currently, mekya is not yet a documented and exported feature in V2Ray; using the engineering-mode configuration is required.
A working configuration can be found in the performance comparison experiment artifacts: https://gist.github.com/xiaokangwang/1a99130a498f0eca0e61253d6c791596 (v2ray_config_client.json is the client config file, and v2ray_config.json is the server config file). The comparison itself is not that useful, as it is specifically designed to showcase the mekya protocol, and real network conditions will vary from user to user. But it runs nonetheless.
Please let me know if there is any feedback or comment!
It would be better to publish a protocol specification with some explanations and suggestions instead of a high-level view for potential implementers, especially non-gopher.
Yes! I understand that a protocol specification would be beneficial for third-party implementations of the protocol. Sadly, in reality a protocol has to be designed from the ground up with third-party implementation in mind to make it easy for others to implement. In mekya's case, it reuses the mKCP protocol in V2Ray, which is a large first-party protocol without full documentation. Getting it fully documented would be significant work, consuming time that would be better spent improving the protocol or developing new ones. For this reason, third-party re-implementations of this protocol are appreciated but not expected.
> Getting it fully documented would be significant work, consuming time that would be better spent improving the protocol or developing new ones.
Yeah, I understand that it is a real burden for developers to have to keep an eye on such work. Anyway, the anti-censorship community will eventually benefit from well-documented protocols. 🎉
https://github.com/v2fly/v2ray-core/discussions/3129#discussioncomment-10469813
- I think SplitHTTP may eventually be "specified"; the basic protocol is simple enough, as long as one picks a good HTTP library. Remind me in a few months when the dust around recent padding-related changes has settled...
- I find that outside of Cloudflare and nginx/direct connections, there are a few "challenging" CDNs when it comes to performance, as they add both latency to individual requests and also limit the amount of concurrent requests that can be done. You simulate request latency in your benchmarks, but were there any tests for the latter? In xray splithttp there is `scMinPostsIntervalMs`, or as a last resort, `scMaxConcurrentPosts`. In v2fly, are `uplink_capacity` and `downlink_capacity` the settings I'd be looking for when the code comes out?
- In general, was this tested on many CDNs?
Finally, happy to see some development on v2fly again, and excited to take a look again when mekya comes out.
nginx supports rate limiting based on the number of requests. I think that is the best way to imitate a limit on concurrent connections. It can be used to inject faults and see how the protocol reacts.
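For readers without an nginx setup, a similar fault can be injected with a tiny Go reverse proxy that caps the number of concurrent in-flight requests and rejects the rest, roughly imitating a CDN that limits concurrency. The sketch below is an illustrative test harness only (the port numbers and the `maxInFlight` limit are arbitrary assumptions), not part of V2Ray.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical upstream: the real mekya/meek server under test.
	upstream, err := url.Parse("http://127.0.0.1:10001")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	const maxInFlight = 8 // assumed CDN concurrency limit for the experiment
	slots := make(chan struct{}, maxInFlight)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		select {
		case slots <- struct{}{}:
			defer func() { <-slots }()
			proxy.ServeHTTP(w, r)
		default:
			// Reject requests beyond the limit to see how the tunnel reacts.
			http.Error(w, "concurrency limit reached", http.StatusServiceUnavailable)
		}
	})

	log.Fatal(http.ListenAndServe("127.0.0.1:10002", handler))
}
```

Pointing the client's forwarder address at this proxy instead of the real server would then exercise the protocol's behavior under a hard concurrency cap.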
There are settings like `max_write_delay`, `max_request_size`, and `polling_interval_initial` that determine how many requests are sent. However, there is no hard limit on the number of requests sent in any case, as the protocol is not designed to hold requests back at all (it is KCP, after all).
As for CDNs, I have not tested its performance on any CDN other than Cloudflare yet... Sorry, I have very limited time and energy to work on V2Ray, so...
Here are some previous notes on integrating a Turbo Tunnel design into meek, with QUIC or KCP.
- #21
- meek section of the Turbo Tunnel paper
- turbotunnel-quic branch and diff
- turbotunnel-kcp branch and diff
In those early versions I noticed poor performance. In a merged turbotunnel branch (which uses KCP), I made some important performance optimizations learned in the course of implementing dnstt and Champa, particularly: