FULI
> A question about FEC: with -mode fast -datashard 5 -parityshard 5 on a link with roughly 15% packet loss, FECRecovered/FECParityShards is only about 1%, and it barely changes even on a link with 35% loss. Halving the client's rcvwnd, or changing ds:ps to 10:3 or 70:30, makes FECRecovered even lower, dropping straight to 0.03%. Is this caused by retransmission being triggered before 10 packets have accumulated to complete an FEC group? The test traffic is a video stream. On both links RetransSegs/OutSegs is 2%~5% above the loss rate, and the higher the loss rate, the larger the share of fast retransmits. When retransmission alone solves the problem like this, is there any point in enabling FEC at all?

If the latency from the user to the server is low, I don't think FEC is necessary; the penalty of a single retransmission is small. So a practical optimization strategy would probably also include some RTT measurement before deciding on a policy.
Right, the latency here is very low, and the link quality is good.
This is just a tool, not a silver bullet; the actual policy still needs to be adjusted dynamically based on measurements.
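The FEC numbers above can be reasoned about with a simple independence model: a Reed-Solomon group of datashard+parityshard packets is decodable iff at least datashard of them arrive, so a 10:3 or 70:30 split tolerates a much smaller loss fraction per group than 5:5. The sketch below is an illustrative back-of-the-envelope calculation under an independent-loss assumption (`decodeProb` is a hypothetical helper, not part of kcptun; real links lose packets in bursts, and retransmission may repair a group before FEC gets the chance):

```go
package main

import (
	"fmt"
	"math"
)

// lnChoose returns ln(C(n, k)) via the log-gamma function,
// avoiding overflow for large groups such as 70:30.
func lnChoose(n, k int) float64 {
	lg, _ := math.Lgamma(float64(n + 1))
	lk, _ := math.Lgamma(float64(k + 1))
	lnk, _ := math.Lgamma(float64(n - k + 1))
	return lg - lk - lnk
}

// decodeProb is the probability that an FEC group of
// dataShards+parityShards packets is fully decodable when each packet
// is lost independently with probability loss: the group recovers
// iff at least dataShards of its shards arrive.
func decodeProb(dataShards, parityShards int, loss float64) float64 {
	n := dataShards + parityShards
	p := 0.0
	for k := dataShards; k <= n; k++ {
		p += math.Exp(lnChoose(n, k)) *
			math.Pow(1-loss, float64(k)) * math.Pow(loss, float64(n-k))
	}
	return p
}

func main() {
	// Compare the ds:ps ratios mentioned in the question at 15% loss.
	for _, cfg := range [][2]int{{5, 5}, {10, 3}, {70, 30}} {
		fmt.Printf("ds:ps = %d:%d  P(group decodable at 15%% loss) = %.4f\n",
			cfg[0], cfg[1], decodeProb(cfg[0], cfg[1], 0.15))
	}
}
```

Under this model 5:5 tolerates up to 50% shard loss per group while 10:3 tolerates only ~23%, which is consistent with FECRecovered dropping when the ratio is changed; it does not, however, explain the low absolute recovery rate, which fits the "retransmission wins the race" explanation better.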
Is it possible to bind a single UDP port on the client side to an external server port via the server?
It seems this feature is not useful if we can only do remote port forwarding on a SINGLE port
An ideal solution would be to create tun devices between the client and server; by manipulating iptables, all UDP packets could be carried to the remote server, and then the remote server...
The question is: how will users actually use this UDP port forwarding feature? All I know is that, for me, single-port forwarding is not useful.
https://www.jianshu.com/p/55c0259d1a36 https://develop.socks-proto.cpp.al/socks/protocol/requests_and_replies/udp_associate.html https://ph4ntonn.github.io/Socks5-UDP A possible solution is UDP Associate; maybe ss-libev(UDP) -> client(UDP) -> KCP packets -> server(UDP) -> ss-libev(UDP) -> UDP packets is feasible.
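In the chain above, every datagram that ss-libev relays via UDP Associate carries the SOCKS5 UDP request header defined in RFC 1928 section 7, so a relay sitting in the middle only needs to parse that header to learn the real destination. A minimal sketch of that parsing (`parseSocks5UDPHeader` is a hypothetical helper, not ss-libev's or kcptun's actual code):

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
	"net"
)

// parseSocks5UDPHeader parses the RFC 1928 SOCKS5 UDP request header
// prepended to each relayed datagram:
//   RSV(2) | FRAG(1) | ATYP(1) | DST.ADDR(var) | DST.PORT(2) | DATA
// It returns the destination as "host:port" plus the remaining payload.
// Fragmented datagrams (FRAG != 0) are rejected, as most relays do.
func parseSocks5UDPHeader(b []byte) (dst string, payload []byte, err error) {
	if len(b) < 4 {
		return "", nil, errors.New("short header")
	}
	if b[2] != 0 {
		return "", nil, errors.New("fragmentation not supported")
	}
	var host string
	var off int
	switch b[3] {
	case 0x01: // ATYP = IPv4
		if len(b) < 4+4+2 {
			return "", nil, errors.New("short IPv4 address")
		}
		host = net.IP(b[4:8]).String()
		off = 8
	case 0x03: // ATYP = domain name, first byte is its length
		l := int(b[4])
		if len(b) < 5+l+2 {
			return "", nil, errors.New("short domain name")
		}
		host = string(b[5 : 5+l])
		off = 5 + l
	case 0x04: // ATYP = IPv6
		if len(b) < 4+16+2 {
			return "", nil, errors.New("short IPv6 address")
		}
		host = net.IP(b[4:20]).String()
		off = 20
	default:
		return "", nil, errors.New("unknown ATYP")
	}
	port := binary.BigEndian.Uint16(b[off : off+2])
	return fmt.Sprintf("%s:%d", host, port), b[off+2:], nil
}

func main() {
	// RSV=0x0000, FRAG=0, ATYP=IPv4, 8.8.8.8:53, payload "hi"
	pkt := []byte{0, 0, 0, 0x01, 8, 8, 8, 8, 0, 53, 'h', 'i'}
	dst, payload, err := parseSocks5UDPHeader(pkt)
	fmt.Println(dst, string(payload), err)
}
```

Because the destination travels inside each datagram, a single forwarded UDP port is enough to multiplex traffic to arbitrary destinations, which is what makes the "single port UDP Associate" idea workable.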
So, do you think supporting the carrying of packets for a single-port UDP Associate would satisfy the requirements?
That seems to solve the problem: an external UDP-to-TCP protocol conversion seems simple and elegant. One only has to start another KCP instance pair.