SplitHTTP h3 h2 multiplex controller
Originally this was reported as a panic under #3556, and the changes there had some effect on it. But the discussion gradually shifted to an unrelated v2rayNG bug. That bug is fixed now, but the dialerProxy issue remains.
configs:
config-sh-h3.json config-sh-h3-server.json
```
./xray -c config-sh-h3-server.json
./xray -c config-sh-h3.json
```
command to reproduce:
```
$ curl -x socks5h://127.0.0.1:2080 ifconfig.me
curl: (52) Empty reply from server
```
error in the logs when using d8994b7:
transport/internet/splithttp: failed to send download http request > Get "https://127.0.0.1:6001/6e67de80-f752-4df0-a828-3bcc3d1aaaf6": transport/internet/splithttp: unsupported connection type: %T&{reader:0xc0004658f0 writer:0xc000002250 done:0xc0002c84e0 onClose:[0xc000002250 0xc000002278] local:0xc000465890 remote:0xc0004658c0}
when reverting d8994b7, the client crashes instead:
```
panic: interface conversion: net.Conn is *cnc.connection, not *internet.PacketConnWrapper

goroutine 67 [running]:
github.com/xtls/xray-core/transport/internet/splithttp.getHTTPClient.func2({0x15735c8, 0xc000311ae0}, {0x0?, 0xc00004f700?}, 0xc00031e4e0, 0xc0001e14d0)
	github.com/xtls/xray-core/transport/internet/splithttp/dialer.go:108 +0x145
github.com/quic-go/quic-go/http3.(*RoundTripper).dial(0xc0002f7ce0, {0x15735c8, 0xc000311ae0}, {0xc00034ea30, 0xe})
	github.com/quic-go/[email protected]/http3/roundtrip.go:312 +0x27a
github.com/quic-go/quic-go/http3.(*RoundTripper).getClient.func1()
	github.com/quic-go/[email protected]/http3/roundtrip.go:249 +0x77
created by github.com/quic-go/quic-go/http3.(*RoundTripper).getClient in goroutine 66
	github.com/quic-go/[email protected]/http3/roundtrip.go:246 +0x289
```
The QUIC transport probably has the same issue: https://github.com/XTLS/Xray-core/blob/a0040f13dd42264bf0790ce4fe770fd350fae585/transport/internet/quic/dialer.go#L151-L161
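For context, a rough sketch of the failing step (hypothetical code, not the actual dialer.go; the PacketConnWrapper field name is an assumption): the H3/QUIC path needs a packet connection to hand to quic-go, but dialerProxy returns a *cnc.connection, which is neither a *internet.PacketConnWrapper (hence the pre-d8994b7 assertion panic) nor a net.PacketConn (hence the newer "unsupported connection type" error).

```go
package sketch // hypothetical helper, not part of Xray-core

import (
	"fmt"
	"net"

	"github.com/xtls/xray-core/transport/internet"
)

// toPacketConn shows the shape of the problem: only a wrapped UDP socket from the
// system dialer can be handed to quic-go as-is.
func toPacketConn(raw net.Conn) (net.PacketConn, error) {
	switch conn := raw.(type) {
	case *internet.PacketConnWrapper:
		// UDP socket from the system dialer: usable by quic-go directly.
		// (Conn as the inner-field name is an assumption for this sketch.)
		return conn.Conn, nil
	case net.PacketConn:
		// Would cover a dialerProxy connection only if it also implemented net.PacketConn.
		return conn, nil
	default:
		// Pre-d8994b7 this was a bare type assertion, so a *cnc.connection panicked;
		// post-d8994b7 it fails with an "unsupported connection type" error instead.
		return nil, fmt.Errorf("unsupported connection type: %T", raw)
	}
}
```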
This is the `type connection struct` in common/net/cnc/connection.go, but it doesn't implement net.PacketConn yet; I'll write it.
Even after implementing ReadFrom and WriteTo, the `local` and `remote` of `type connection struct` are both 0.0.0.0, and it ends up panicking here in quic-go:
```go
func (m *connMultiplexer) AddConn(c indexableConn) {
	m.mutex.Lock()
	defer m.mutex.Unlock()

	connIndex := m.index(c.LocalAddr())
	p, ok := m.conns[connIndex]
	if ok {
		// Panics if we're already listening on this connection.
		// This is a safeguard because we're introducing a breaking API change, see
		// https://github.com/quic-go/quic-go/issues/3727 for details.
		// We'll remove this at a later time, when most users of the library have made the switch.
		panic("connection already exists") // TODO: write a nice message
	}
	m.conns[connIndex] = p
}
```
Maybe fill `local` with some arbitrary value to fool it? Also, I'm not sure whether the other end of cnc knows this is UDP rather than TCP, ~~judging by the fact that WG works, it probably does~~
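A minimal sketch of that idea (hypothetical names, not current Xray code): wrap the dialerProxy connection so it satisfies net.PacketConn, and report a synthetic per-connection LocalAddr so quic-go's connMultiplexer doesn't index two connections under 0.0.0.0. Whether datagram boundaries survive across the cnc connection is exactly the open question above.

```go
package sketch // hypothetical wrapper, not part of Xray-core

import (
	"math/rand"
	"net"
)

// fakePacketConn adapts a stream-style net.Conn (e.g. the *cnc.connection returned
// via dialerProxy) to net.PacketConn so quic-go will accept it.
type fakePacketConn struct {
	net.Conn
	local net.Addr
}

func newFakePacketConn(c net.Conn) *fakePacketConn {
	return &fakePacketConn{
		Conn: c,
		// Synthetic local address with a random port, so quic-go's connMultiplexer
		// indexes each wrapped connection under a distinct key instead of 0.0.0.0.
		local: &net.UDPAddr{IP: net.IPv4(127, 0, 0, 1), Port: 1024 + rand.Intn(64511)},
	}
}

// ReadFrom and WriteTo treat every datagram as belonging to the single peer the
// inner connection was dialed to.
func (c *fakePacketConn) ReadFrom(p []byte) (int, net.Addr, error) {
	n, err := c.Conn.Read(p)
	return n, c.Conn.RemoteAddr(), err
}

func (c *fakePacketConn) WriteTo(p []byte, _ net.Addr) (int, error) {
	return c.Conn.Write(p)
}

func (c *fakePacketConn) LocalAddr() net.Addr { return c.local }
```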
Actually this problem can be fixed later, ~~or maybe not at all~~, since SplitHTTP H3 basically never needs to be combined with dialerProxy; ~~I only ran into it because I hadn't updated my old outbound config~~
I noticed that SplitHTTP H3's latency is twice that of H2. It looks like there's no connection reuse? https://github.com/XTLS/Xray-core/issues/3560#issuecomment-2240883881 also only panics once a second connection appears, so there is a bit of commonality.
> I noticed that SplitHTTP H3's latency is twice that of H2. It looks like there's no connection reuse?

@dyhkwong About this issue: it shouldn't be necessary to call OpenStream() manually, right?
SplitHTTP H3 also has a globalDialerMap, but strangely quic-go's http3 doesn't reuse the connection automatically and Dials every time. Is something not set up correctly somewhere?
Maybe quic-go/http3 just doesn't support it; I didn't implement opening streams myself, and reusing the earlyConnection or the UDPConn both report errors ( Let's just go with mux
> Maybe quic-go/http3 doesn't support it. I didn't implement stream myself. Reusing earlyConnection or UDPConn will result in an error ( or mux).

According to RFC 9114 Section 4.1, only one request can be sent on each stream:

A client sends an HTTP request on a request stream, which is a client-initiated bidirectional QUIC stream; see Section 6.1. A client MUST send only a single request on a given stream. A server sends zero or more interim HTTP responses on the same stream as the request, followed by a single final HTTP response.
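As an aside, one request per stream doesn't rule out connection reuse: quic-go's http3.RoundTripper keeps one QUIC connection per authority and opens a fresh request stream for each request. A minimal check (the URL is a placeholder and assumed to speak HTTP/3):

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/quic-go/quic-go/http3"
)

func main() {
	rt := &http3.RoundTripper{} // one RoundTripper: at most one QUIC connection per host
	defer rt.Close()
	client := &http.Client{Transport: rt}

	// Both requests target the same authority, so they share the underlying QUIC
	// connection; per RFC 9114 each request still gets its own bidirectional stream.
	for i := 0; i < 2; i++ {
		resp, err := client.Get("https://example.com/") // placeholder H3-capable URL
		if err != nil {
			fmt.Println(err)
			return
		}
		resp.Body.Close()
		fmt.Println(resp.Status, resp.Proto)
	}
}
```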
> Maybe quic-go/http3 just doesn't support it; I didn't implement opening streams myself, and reusing the earlyConnection or the UDPConn both report errors (

https://github.com/XTLS/Xray-core/pull/3565#issuecomment-2241348793 With that many upload POSTs, surely they can't all be opening new connections, ~~that would be way too painful~~, so it feels like reuse does happen, "but for some reason it stops reusing as soon as a new connection is proxied", could it be because of the GET? @mmmray what do you think?
> Let's just go with mux

No. MUX over QUIC has head-of-line blocking; one of H3's big advantages would be lost.
> No. MUX over QUIC has head-of-line blocking; one of H3's big advantages would be lost.

Checked the group chat; to avoid misunderstanding, this refers to Xray's MUX over a single QUIC stream.
I have only seen this lack of connection reuse with HTTP/1.1. There, it is inherent to the protocol: a chunked transfer cannot be aborted by the client without tearing down the TCP connection. Upload was still reused correctly.
In h2 it already works normally. I still have to catch up with how QUIC behaves here, but I think there is no inherent reason related to the protocol.
You can try to create a separate RoundTripper for upload and download, to see if GET interferes with the connection reuse of POST. This is how I debugged things in h1. If nobody does it I can take a look next week.
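A minimal sketch of that experiment (hypothetical, not the SplitHTTP code; in the real dialer the QUIC config and custom dial function would be copied into both transports the same way):

```go
package sketch // hypothetical debugging setup, not part of Xray-core

import (
	"crypto/tls"
	"net/http"

	"github.com/quic-go/quic-go/http3"
)

// newDebugClients builds separate HTTP/3 clients for the download (GET) and
// upload (POST) paths, so connection reuse of one cannot affect the other.
func newDebugClients(tlsCfg *tls.Config) (download, upload *http.Client) {
	download = &http.Client{Transport: &http3.RoundTripper{TLSClientConfig: tlsCfg}}
	upload = &http.Client{Transport: &http3.RoundTripper{TLSClientConfig: tlsCfg}}
	return download, upload
}
```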
> I can take a look next week.

~~You scared me for a second; I checked the date and realized today is Sunday~~
Anyway, for now: "I noticed that SplitHTTP H3's latency is twice that of H2. It looks like there's no connection reuse?" https://github.com/XTLS/Xray-core/issues/3560#issuecomment-2241001918
> Maybe quic-go/http3 doesn't support it. I didn't implement stream myself. Reusing earlyConnection or UDPConn will result in an error ( or mux).
> According to RFC 9114 Section 4.1, only one request can be sent on each stream:
> A client sends an HTTP request on a request stream, which is a client-initiated bidirectional QUIC stream; see Section 6.1. A client MUST send only a single request on a given stream. A server sends zero or more interim HTTP responses on the same stream as the request, followed by a single final HTTP response.
The machine translator misinterpreted my words. What I'm talking about is opening streams to reuse the QUIC connection, not reusing a QUIC stream.
> SplitHTTP H3 also has a globalDialerMap, but strangely quic-go's http3 doesn't reuse the connection automatically and Dials every time. Is something not set up correctly somewhere?

Debugged the code a bit and found it's not a quic-go problem, ~~which is a bit funny~~. There is this spot in SplitHTTP's dialer.go:

```go
if isH3 {
	dest.Network = net.Network_UDP
}
```

so that after it was eventually stored:

```go
globalDialerMap[dialerConf{dest, streamSettings}] = client
```

the lookup at the start of the next call never found it:

```go
if client, found := globalDialerMap[dialerConf{dest, streamSettings}]; found {
	return client
}
```

That said, reusing a single QUIC connection for everything isn't necessarily better, so I'll commit it first and not rush the next release; please test whether there's any difference in throughput.
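A sketch of the shape of the fix (simplified; only globalDialerMap, dialerConf, dest and isH3 are names from the thread, the rest is assumed): normalize dest before touching the cache, so the lookup key and the store key agree.

```go
// Simplified sketch of the dialer's client cache, not the actual getHTTPClient code.
func getHTTPClient(dest net.Destination, streamSettings *internet.MemoryStreamConfig, isH3 bool) DialerClient {
	if isH3 {
		// Normalize first: with this above the lookup, the cached H3 client is found
		// on the next call instead of dialing a brand-new QUIC connection.
		dest.Network = net.Network_UDP
	}

	globalDialerAccess.Lock()
	defer globalDialerAccess.Unlock()

	if client, found := globalDialerMap[dialerConf{dest, streamSettings}]; found {
		return client
	}

	client := createHTTPClient(dest, streamSettings) // hypothetical constructor
	globalDialerMap[dialerConf{dest, streamSettings}] = client
	return client
}
```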
So the client was simply never found ( The reason it was written that way originally is that otherwise the dialer below wouldn't know it should return a udpConn (
> I noticed that SplitHTTP H3's latency is twice that of H2. It looks like there's no connection reuse?

But in my tests of https://github.com/XTLS/Xray-core/commit/22535d86439952a9764d65119bcc739929492717, H3's latency is still 3/4 higher than H2's. The test URL is HTTPS, so it's 2-RTT. ~~Planning to start from Wireshark again~~
> I noticed that SplitHTTP H3's latency is twice that of H2. It looks like there's no connection reuse?
> But in my tests of 22535d8, H3's latency is still 3/4 higher than H2's. The test URL is HTTPS, so it's 2-RTT. ~~Planning to start from Wireshark again~~

It seems there's an extra round trip before the inner Client Hello is sent. Anyway, you should all measure the latency and look at Wireshark. ~~I'm off to sleep~~
> I noticed that SplitHTTP H3's latency is twice that of H2. It looks like there's no connection reuse?
> But in my tests of 22535d8, H3's latency is still 3/4 higher than H2's. The test URL is HTTPS, so it's 2-RTT. ~~Planning to start from Wireshark again~~
> It seems there's an extra round trip before the inner Client Hello is sent. Anyway, you should all measure the latency and look at Wireshark. ~~I'm off to sleep~~
I analyzed the packet capture and found that these HTTP/3 requests seem to be serialized. SplitHTTP needs two requests, a GET and a POST, to establish a connection, and the current behavior is GET first, then POST. With h2 the two requests are sent at the same time, but with h3 the client only issues the POST after the server has returned 200 OK, which adds one extra RTT and causes the extra latency.
Wireshark screenshots below.
Here is h2: the GET and POST are sent at the same time.
Here is h3: the POST is only issued after the GET goes out and the server's 200 OK is received.
Very strange. I thought it was a problem with the h3 client, but I tried handling the requests with two clients, and even across two QUIC connections the POST in one still waits until the GET in the other has been sent, as if they were telepathic (
So is it maybe the server that enforces this synchronization?
> So is it maybe the server that enforces this synchronization?

It is obvious that the time at which the request is sent is controlled by the local client.
> Very strange. I thought it was a problem with the h3 client, but I tried handling the requests with two clients, and even across two QUIC connections the POST in one still waits until the GET in the other has been sent, as if they were telepathic (

Even with one of upload/download switched to h2, the behavior is still there ((
> I analyzed the packet capture and found that these HTTP/3 requests seem to be serialized. SplitHTTP needs two requests, a GET and a POST, to establish a connection, and the current behavior is GET first, then POST. With h2 the two requests are sent at the same time, but with h3 the client only issues the POST after the server has returned 200 OK, which adds one extra RTT and causes the extra latency.

Debugged the code again (I'll spare the details); the problem turned out to be this part of the OpenDownload function in SplitHTTP's client.go:
```go
trace := &httptrace.ClientTrace{
	GotConn: func(connInfo httptrace.GotConnInfo) {
		remoteAddr = connInfo.Conn.RemoteAddr()
		localAddr = connInfo.Conn.LocalAddr()
		gotConn.Close()
	},
}
```
With H2, except for the very first request, GotConn is called back immediately and gotConn.Close() runs, so the OpenDownload function and dialer.go's Dial function return right away.
With H3, GotConn is never called back, so the OpenDownload function only returns after c.download.Do(req), and remoteAddr and localAddr are never obtained.
quic-go does not support httptrace yet: https://github.com/quic-go/quic-go/issues/3342
Since with H3 we aren't getting remoteAddr and localAddr anyway, I've changed it to call gotConn.Close() directly for now to avoid the blocking; as for getting the addresses, @mmmray can look into that.
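Roughly what that change looks like (a sketch following the description above; the isH3 flag and the surrounding OpenDownload structure are assumptions, not the exact committed code):

```go
if c.isH3 {
	// quic-go never fires httptrace's GotConn, so unblock the caller immediately;
	// remoteAddr and localAddr simply stay unset for H3 for now.
	gotConn.Close()
} else {
	trace := &httptrace.ClientTrace{
		GotConn: func(connInfo httptrace.GotConnInfo) {
			remoteAddr = connInfo.Conn.RemoteAddr()
			localAddr = connInfo.Conn.LocalAddr()
			gotConn.Close()
		},
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
}
```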
To ⑥: Of course SplitHTTP can be used for reverse proxying, ~~it's just that the speed in that direction is rather underwhelming~~
To ⑥: Of course SplitHTTP supports acceptProxyProtocol, ~~it's just that when you put it behind a CDN there's X-Forwarded-For, which takes higher priority~~
As for UDP: https://github.com/pires/go-proxyproto/issues/88
Thanks for investigating. Is this the main reason for the slowness, or is there some other synchronization between POST requests as well? Does maxConcurrentUploads work correctly?
I'm trying to remember what I considered on the first splithttp PR, instead of using httptrace. I believe all options were terrible, and it just got more complicated when I tried to gradually eliminate RTT.
Inside the dialer, we already have access to the raw connection, so one could pass it upwards by setting it on the DialerClient type.
However, it will only be called once, and there is no guarantee that it actually corresponds to the IP address used by the HTTP request. I think it's better to log nothing than to log something that could be wrong.
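A sketch of that idea (field and method names here are assumptions, not the current DialerClient): have the transport's dial callback record the first raw connection it opens, with exactly the caveat above that it may not match the connection a given request actually uses.

```go
// Hypothetical fields on the dialer client, not current Xray code.
type dialerClient struct {
	download   *http.Client
	remoteAddr net.Addr // from the first raw connection dialed by the transport
	localAddr  net.Addr
}

// recordConn would be called from the transport's Dial/DialContext hook.
func (c *dialerClient) recordConn(conn net.Conn) {
	// Only the first dial is kept; later requests may ride a different connection,
	// so these addresses are best-effort (or better left unlogged when in doubt).
	if c.remoteAddr == nil {
		c.remoteAddr = conn.RemoteAddr()
		c.localAddr = conn.LocalAddr()
	}
}
```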
By the way, in your fix one can use `if c.isH3` instead of this casting of RoundTripper.
> By the way, in your fix one can use `if c.isH3` instead of this casting of RoundTripper.
~~Getting ready to force-push~~
Also, SplitHTTP's dialerProxy isn't hard to fix either: implement ReadFrom and WriteTo (Buffer.UDP) for `type connection struct`, ~~then fill `local` with a random value to fool quic-go~~; see https://github.com/XTLS/Xray-core/issues/3560#issuecomment-2240871481 https://github.com/XTLS/Xray-core/issues/3560#issuecomment-2240883881 https://github.com/XTLS/Xray-core/issues/3556#issuecomment-2241031881 @mmmray