Xray-core
SplitHTTP client: Add multiplex controller for H3 & H2
I'm not sure I've correctly understood how this works, and only after I finished writing it did I notice upstream had landed a pile of new commits, so I may also have broken something while merging. My gut says this might have a resource leak, but it at least seems to work, so I'm submitting it. This assumes http.Client reuses connections across requests, and in my actual usage it does look like connections are reused per request. So idk.
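For anyone who wants to verify that assumption themselves, here is a minimal, self-contained sketch using net/http/httptrace (the URL is just a placeholder); GotConnInfo.Reused reports whether a request rode an existing connection:

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptrace"
)

func main() {
	client := &http.Client{Transport: &http.Transport{}}
	trace := &httptrace.ClientTrace{
		GotConn: func(info httptrace.GotConnInfo) {
			fmt.Println("connection reused:", info.Reused)
		},
	}
	for i := 0; i < 2; i++ {
		req, err := http.NewRequest("GET", "https://example.com", nil) // placeholder URL
		if err != nil {
			panic(err)
		}
		req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
		resp, err := client.Do(req)
		if err != nil {
			panic(err)
		}
		// The body must be fully drained and closed, or the
		// connection is never returned to the idle pool.
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
	}
}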
Honestly this code is pretty bad. I'll fix it up later, or we can just wait for someone else to implement it in a better way; at the very least it certainly won't be me.
Note: Interval seems to suggest the period of a scheduled task; if this is a hard lower bound on the minimum interval, Delay might be the more idiomatic name. Also, those 3 sc parameters should probably be moved into mux as well.
Thanks for the PR. Even though only SH can use it for now, let's consider whether we should fold the options into https://xtls.github.io/config/outbound.html#muxobject
Hmm, the underlying principle is different after all: this reuses the client rather than mux.cool, so keeping it here seems reasonable.
Good job! ~~Now I don't have to write the code myself again~~ I'll review it a bit later.
Using an xray version that includes these commits causes a crash after a while on the client side, with SplitHTTP and the h3 ALPN. If needed I can send my configs; I used the new mux mechanism with h3. The server uses the same version, taken from GitHub Actions. I don't know if this helps. I just enabled the new mux and tried "prefer_new", "prefer_reuse" and "prefer_existing", and didn't change any other options.
panic: connection already exists
goroutine 400 [running]:
github.com/quic-go/quic-go.(*connMultiplexer).AddConn(0x400007d0e0, {0x75a9ec4f40?, 0x40000702f8?})
github.com/quic-go/[email protected]/multiplexer.go:59 +0x198
github.com/quic-go/quic-go.(*Transport).init.func1()
github.com/quic-go/[email protected]/transport.go:266 +0x3b0
sync.(*Once).doSlow(0xb?, 0x1?)
sync/once.go:74 +0x100
sync.(*Once).Do(...)
sync/once.go:65
github.com/quic-go/quic-go.(*Transport).init(0x4000d18700, 0x78?)
github.com/quic-go/[email protected]/transport.go:225 +0x58
github.com/quic-go/quic-go.(*Transport).dial(0x4000d18700, {0x5db44f6450, 0x40021bdea0}, {0x5db44ed5b0, 0x4000d7d5f0}, {0x0, 0x0}, 0x4000d109c0, 0x4000d1ab40, 0x1)
github.com/quic-go/[email protected]/transport.go:212 +0x70
github.com/quic-go/quic-go.(*Transport).DialEarly(...)
github.com/quic-go/[email protected]/transport.go:204
github.com/quic-go/quic-go.DialEarly({0x5db44f6450, 0x40021bdea0}, {0x5db44fc670?, 0x40000702f8?}, {0x5db44ed5b0, 0x4000d7d5f0}, 0x4000d109c0, 0x4000d1ab40)
github.com/quic-go/[email protected]/client.go:95 +0xfc
github.com/xtls/xray-core/transport/internet/splithttp.createHTTPClient.func2({0x5db44f6450, 0x40021bdea0}, {0x5db3d00321?, 0x5bd7?}, 0x4000d109c0, 0x4000d1ab40)
github.com/xtls/xray-core/transport/internet/splithttp/dialer.go:130 +0x1b4
github.com/quic-go/quic-go/http3.(*RoundTripper).dial(0x4000251e30, {0x5db44f6450, 0x40021bdea0}, {0x4003602db0, 0x10})
github.com/quic-go/[email protected]/http3/roundtrip.go:312 +0x224
github.com/quic-go/quic-go/http3.(*RoundTripper).getClient.func1()
github.com/quic-go/[email protected]/http3/roundtrip.go:249 +0x7c
created by github.com/quic-go/quic-go/http3.(*RoundTripper).getClient in goroutine 397
github.com/quic-go/[email protected]/http3/roundtrip.go:246 +0x258
In my rough testing, prefer_new is friendlier.
When the ALPN is h2, using prefer_reuse actually makes things slower; disabling mux works better.
I'll take another look once https://github.com/XTLS/Xray-core/pull/3624 and https://github.com/XTLS/Xray-core/pull/3643 are merged and this is rebased.
@ll11l1lIllIl1lll You can rebase now, onto main.
I've changed the title for now; later I'll review the code against https://github.com/XTLS/Xray-core/issues/3560#issuecomment-2247495778
General comment: I think there are already too many things called mux. Most are related to the v2ray universe, and there is some dangerous overlap with HTTP via "h2mux" in sing-box. This is a mux, but the RFCs don't call it that. Can it be called "connection pool settings" or something else?
First, mode doesn't need that many aliases. Second, the two basic modes are mutually exclusive, not coexisting: prefer_reuse corresponds to concurrency, and prefer_new corresponds to the other one.
I don't seem to see any code that limits how many times a single connection can be reused in total.
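For illustration only, the kind of check being asked about might look roughly like this (trackedConn, totalUses and take are hypothetical names, not identifiers from this PR):

// trackedConn counts cumulative reuse of one physical connection.
type trackedConn struct {
	totalUses int // sub-connections carried so far
}

// take reports whether the connection may carry one more sub-connection
// under a maxUses-style cap; 0 means unlimited.
func (c *trackedConn) take(maxUses int) bool {
	if maxUses > 0 && c.totalUses >= maxUses {
		return false // exhausted: the caller should dial a fresh connection
	}
	c.totalUses++
	return true
}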
Let's not add the send rate and number of bytes sent/rcvd mentioned under https://github.com/XTLS/Xray-core/issues/3560#issuecomment-2247495778 for now. ~~Just mentioning it, since the PR doesn't have them either~~
It could be called XMC: Xray Multiplex Controller, because once this mechanism is fleshed out it will also be added to Xray's Mux, gRPC, H2, etc. ~~although H2 may end up being removed~~
@Fangliding Docs for xPaddingBytes
@Fangliding And "Cache-Control: no-store" as well
It's too awkward to update the documentation when the release is not out yet. Starting some updates here: https://github.com/XTLS/Xray-docs-next/pull/558
The change in #3652 is on by default; there's no config option ((((
To konsclufka: ~~time to come out and get to work~~
@mmmray ~~Time to rebase again~~
Are these options for bypassing ISPs' single-connection speed limits? Is it possible to add the same multi-connection controller to gRPC?
I'll take over this PR. I might add some more things if there is time. I hope to have it ready by Sunday.
@APT-ZERO The answer is: yes, it can be added to gRPC and a bunch of others, but it's not planned right now. And it can bypass speed limits like that, but I think if everybody uses it, it will become meaningless (especially if you think of Irancell). That's all I'll say here. I suggest going to GitHub discussions or Telegram for general discussion; I think this PR thread is already way too long and hard to navigate ~~but I'm often the first one to complain about such things~~
@mmmray Glad you've taken over this PR. ~~I think, since the Iranians are waiting for a new release~~, we can cut a new release first to see whether the changes accumulated on main so far have any bugs. ~~And although the new release has been waiting on this PR, if this PR shipped in it too, this release would contain far too much~~, so you have plenty of time.
I think this satisfies all requirements from https://github.com/XTLS/Xray-core/issues/3560#issuecomment-2247495778
The mux manager now works at the http.RoundTripper level, and muxes individual HTTP requests instead of virtual connections.
I think this is better for reusability across transports. It was said somewhere that it should eventually be ported to grpc transport, but it seems that grpc-go actually makes it difficult to use a custom RoundTripper, so it may not happen so soon after all.
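To make the idea concrete, here is a rough sketch of what request-level muxing through a wrapping RoundTripper can look like. This is not the PR's actual code: muxRoundTripper, dial and maxConns are made-up names, and the real implementation also tracks concurrency, lifetime and use counts.

import (
	"math/rand"
	"net/http"
	"sync"
)

// muxRoundTripper fans requests out over a pool of underlying transports,
// each of which owns one physical (e.g. H2 or H3) connection.
type muxRoundTripper struct {
	mu       sync.Mutex
	pool     []http.RoundTripper
	dial     func() http.RoundTripper // builds a fresh transport, e.g. an *http3.RoundTripper
	maxConns int                      // the "connections" setting
}

func (m *muxRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	m.mu.Lock()
	var rt http.RoundTripper
	if len(m.pool) < m.maxConns {
		rt = m.dial() // still below the target: open a new connection
		m.pool = append(m.pool, rt)
	} else {
		rt = m.pool[rand.Intn(len(m.pool))] // at the target: reuse a random one
	}
	m.mu.Unlock()
	return rt.RoundTrip(req)
}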
The defaults are as follows:
"splithttpSettings": {
"httpMux": {
"concurrency": 0, // infinite concurrency per physical connection
"connections": 0, // do not open any connection "eagerly"
"connectionLifetimeMs": 0, // permit any connection to survive for "infinity" milliseconds
"maxUses": 0 // allow "infinity" sub-connections on one connection before throwing it away
}
}
These defaults are equivalent to the current behavior, which is to "mux everything" in a maximally aggressive way. Ironically, when the httpMux key is removed, muxing is actually enabled in the most aggressive way, so it works in the opposite way from mux.cool.
I think the last two parameters are self-explanatory. Let's talk about the modes.
The two modes "max connections" and "max concurrency" are not encoded as modes at all. Instead, you set one parameter or the other:
Maximum number of simultaneous sub-connections (concurrency): fill existing connections up with reuse first, then open new ones
"splithttpSettings": {
"httpMux": {
"concurrency": 8 // each outer connection holds at most 8 concurrent HTTP requests, open more connections as needed
}
}
This should be familiar to mux.cool users.
~~There is a caveat: concurrency limits the amount of concurrent HTTP requests, not the amount of concurrent "splithttp virtual connections". SplitHTTP itself has a lot of concurrent requests, so concurrency=1 actually achieves an "un-muxing" effect and will cause dozens of TCP/UDP connections to spawn for a single connection. I think this will be interesting to some people who want to cheat QoS. Otherwise, a more conservative value would be much higher and aligned with scMaxConcurrentPosts.~~ It works at a per-subconnection level now, not per-request. The behavior is more intuitive, but this cannot be used to improve single-threaded upload anymore.
Maximum number of simultaneous TCP/UDP "connections": open new connections until the total is filled, then reuse them
"splithttpSettings": {
"httpMux": {
"connections": 8 // open connections until there are 8 of them, and then reuse them
}
}
When there are many open connections, each ~~HTTP request~~ sub-connection will be sent through a random connection in the pool. (I have not verified this part with any test or Wireshark)
Both connections and concurrency can be set at the same time:
"splithttpSettings": {
"httpMux": {
"connections": 8,
"concurrency": 16
}
}
In this case, the mux manager will open 8 connections first, then open more connections if the concurrency limit has been hit on all connections.
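Roughly, that selection order could be sketched like this; all identifiers here are illustrative, not the PR's actual ones:

import "net/http"

type pooledConn struct {
	rt       http.RoundTripper
	inflight int // sub-connections currently open on this connection
}

type connPool struct {
	conns       []*pooledConn
	connections int // fill the pool to this size first
	concurrency int // then allow this many sub-connections per entry
	dial        func() http.RoundTripper
}

func (p *connPool) pick() *pooledConn {
	// 1. Open connections until the "connections" target is met.
	if len(p.conns) < p.connections {
		c := &pooledConn{rt: p.dial()}
		p.conns = append(p.conns, c)
		return c
	}
	// 2. Pool is full: reuse any connection still under the concurrency cap.
	for _, c := range p.conns {
		if c.inflight < p.concurrency {
			return c
		}
	}
	// 3. Everything is saturated: overflow with one more connection.
	c := &pooledConn{rt: p.dial()}
	p.conns = append(p.conns, c)
	return c
}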
Right now this doesn't apply to HTTP/1.1 at all. HTTP/1.1 doesn't have muxing, but it does have request pipelining, so maybe these settings should still be applied to that functionality somehow.
@mmmray How about first sending a PR that removes the ok-related logic?
Also, please send a PR removing the QUIC and DomainSocket transports (and error out when a nonexistent transport is detected)
@Fangliding Has that global TransportObject been deleted from the code? It needs to throw an error
~~Anything else that needs deleting can be removed along with the v24 upgrade, or else we can delete things at the next major version bump~~
Updated the v1.8.24 release notes: added the code exit mechanism and revised the description under Changes
Also, I think the logging still has big problems. For example, everyday logs somehow carry no ctx id; Warnings have one, but you can't tell which connection it corresponds to. The format needs to be unified.
@RPRX about this PR, do you have opinions on:
- behavior of http/1.1 (httpMux could control pipelining)
- defaults (for example, we could default to {"connections": 8})
otherwise I'll just merge it
I don't think there's any dependency relation between this and the stuff you're talking about.
@mmmray Generally there is little upload data and much more download data, and with HTTP/1.1 each download is its own separate TCP connection, so I don't think we need to bother with HTTP/1.1
~~I haven't looked at this PR yet, let me take a look~~