Seeking clarification on protocols’ structural viability in 2025
Hi,
I’m preparing for a one-year stay in China for work, and I’d like to gently ask the community for updated clarifications on the current state of various transport protocols, particularly those that could serve as reliable fallback or backup options to Vision-REALITY-XHTTP or similar Xray setups that I could install on my VPS.
My intention is not to ask what works right now under specific regional or temporal conditions, but to better understand what could still be considered structurally viable from a protocol-level perspective: viability that rests on robust design principles rather than on circumstantial (political, technical) gaps in GFW coverage.
To be clear, I’m approaching this from the assumption that anything that has ever been broken, blocked, throttled, or fingerprinted in the past, whether by real-world deployments or formal analysis (e.g., in academic papers or widespread user reports), should be considered tainted unless it has since been fixed or fundamentally redesigned.
Based on what I’ve read, protocols like WireGuard, Shadowsocks, ShadowsocksR, Trojan (and Trojan-Go), VMess, and ShadowTLS v3 appear to be either fully obsolete or irreparably flawed, in the context I am talking about.
However, I’d like to ask about the status of the following:
- **NaïveProxy**: I’ve read the “known weaknesses” notes in the repo, but I’m not sure whether that section is still up to date or what it implies for detectability. ~~But I guess the (in)famous tls-in-tls issue makes it blockable by TLS fingerprint analysis,~~ traffic-pattern analysis, or an ML classifier.
- **Hysteria v2**: particularly when using CUBIC or BBR but without Brutal, in light of the concerns raised in the paper *The Discriminative Power of Cross-layer RTTs in Fingerprinting Proxy Traffic*.
- **TUIC**: regarding QUIC censorship. https://github.com/tuic-protocol/tuic or https://github.com/Itsusinn/tuic
- **ResTLS / AnyTLS / OverTLS**: do any of the current TLS mimicry projects still offer something useful?
  - In the case of AnyTLS, reading through the protocol and the code, it seems to me that several points (the fixed 7-byte header and specific opcode values, the unique pattern of the early settings exchange, the default padding scheme with tight ranges and the padding0 structure, the cmdSYN and cmdFIN heartbeat...) would shape consistent size/order/timing sequences observable to adversaries (see the first sketch after this list).
  - In the case of OverTLS, the answer to the SNI problem does not seem very elegant.
- **ShadowQUIC**: it seems relatively new. Is it actually usable yet, or still experimental?
- **Cloak**: is this still useful, or has it been rendered obsolete in practice or design? Has this been addressed? I don’t see any issue mentioning it and nothing in the changelog. Issues #327 and #246 regarding browser signature maintenance seem to have been fixed since 2.0.8 (and up to date with the latest version of uTLS in the current 2.12.0).
- **Gost** (or, more recently, go-gost from the same author): has the issue affecting Gost described in the paper *Fingerprinting Obfuscated Proxy Traffic with Encapsulated TLS Handshakes* been addressed?
- **Mieru / Mita**: I tried to read the protocol and also looked a bit at the code. I don’t have any crypto/security background, but one thing stood out:

  ```go
  const KeyIter = 64 // in pkg/cipher/keygen.go
  ```

  That’s 64 PBKDF2-SHA256 iterations, which feels very low and insecure, especially considering OWASP’s 2024 recommendation of ≥ 600,000 iterations for PBKDF2-HMAC-SHA256. Is there a reason for this? (Some other elements of the protocol documentation and code have me wondering; see the second sketch after this list.)
- And regarding transport methods more generally: do gRPC, WebSocket, KCP, or leveraging a CDN still offer any advantage against censors in 2025?
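To make the AnyTLS point above concrete: here is a minimal, hypothetical sketch (Go) of the kind of check a passive observer could run against the first few TLS record sizes of a flow. The signature values and tolerance below are invented for illustration, not taken from AnyTLS; the point is only that a fixed-size header and tightly-ranged padding could produce a stable size/order pattern even though the plaintext stays encrypted.

```go
package main

import "fmt"

// recordSig is a hypothetical signature: expected TLS record payload sizes
// (in bytes) for the first few application-data records of a flow, plus a
// tolerance to absorb padding jitter. A real censor would derive these
// values from captured traffic; the numbers here are made up.
type recordSig struct {
	sizes     []int
	tolerance int
}

// matches reports whether the observed record sizes fit the signature.
func (s recordSig) matches(observed []int) bool {
	if len(observed) < len(s.sizes) {
		return false
	}
	for i, want := range s.sizes {
		diff := observed[i] - want
		if diff < -s.tolerance || diff > s.tolerance {
			return false
		}
	}
	return true
}

func main() {
	// Hypothetical: a fixed 7-byte header plus an early settings frame
	// could yield stable first-record sizes across all connections.
	sig := recordSig{sizes: []int{49, 31}, tolerance: 8}
	fmt.Println(sig.matches([]int{49, 30, 1200})) // true: fits the pattern
	fmt.Println(sig.matches([]int{517, 90}))      // false: does not fit
}
```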
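And to put the Mieru/Mita iteration-count question in perspective: the gap between 64 and OWASP’s recommended ≥ 600,000 PBKDF2 iterations is easy to measure directly. A minimal sketch using the standard golang.org/x/crypto/pbkdf2 package; the password and salt are placeholders:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"time"

	"golang.org/x/crypto/pbkdf2"
)

func main() {
	password := []byte("example-password")
	salt := []byte("example-salt") // a real deployment would use a random salt

	// 64 iterations, as in the KeyIter constant quoted above.
	start := time.Now()
	_ = pbkdf2.Key(password, salt, 64, 32, sha256.New)
	fmt.Println("64 iterations:     ", time.Since(start))

	// OWASP's 2024 guidance for PBKDF2-HMAC-SHA256: >= 600,000 iterations.
	start = time.Now()
	_ = pbkdf2.Key(password, salt, 600_000, 32, sha256.New)
	fmt.Println("600,000 iterations:", time.Since(start))
}
```

That is roughly four orders of magnitude less work per guess for an offline attacker, assuming the iteration count is doing the job a password KDF is meant to do in this protocol (I may be misreading its role).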
Any clarifications, corrections, or updated perspectives would be greatly appreciated, especially if others are also evaluating fallback options with a similar threat model.
P.S. @wkrp Apologies in advance if this isn’t the right place for this kind of question. Feel free to close or remove if it's out of scope.
- REALITY + XHTTP + VLESS
- CDN + XHTTP + VLESS-enc
Alternative plan (possible violation of Cloudflare's TOS):
- https://github.com/cmliu/edgetunnel
- https://github.com/6Kmfi6HP/EDtunnel
- https://github.com/yonggekkk/Cloudflare-vless-trojan
CDN IP Speedtest: https://github.com/XIU2/CloudflareSpeedTest
> the (in)famous tls-in-tls issue makes it blockable by TLS fingerprint analysis
In Diwen Xue's 2023 TLS classifier test, NaiveProxy already ranked low in true positive rate due to its required use of multiplexing. Following Diwen's suggestion, I added some undocumented traffic-shaping logic to reduce TLS-over-TLS behaviors, but I have no experimental evidence showing its effects either way, so I won't specify what it is or claim its effects.
But in general, I don't think you would gain a better understanding of structural viability at the protocol level, because censorship-circumvention systems don't operate purely at the protocol level. Protocol design by itself is a poor indicator of survival in real-world conditions, and can even become a single point of failure attracting concentrated adversarial research. You would fare better by adapting to, and taking advantage of, the structural diversity of circumvention systems.
@RememberOurPromise Thanks for the concrete pointers; I'm planning to implement both paths (REALITY + XHTTP + VLESS and CDN + XHTTP + VLESS-enc) as my baseline.
@klzgrad, many thanks for the helpful clarification and for mentioning the traffic shaping you added to reduce TLS-over-TLS signals. Xue’s work indeed showed a notably lower TPR for NaïveProxy (hence the strikethrough of the tls-in-tls mention in my earlier comment). If possible, it would be great to document, at a high level, the intent and mechanisms behind that shaping (and perhaps refresh the wiki tab on the repo, which seems somewhat outdated) to help newcomers understand these mitigations.
Yet I'd like to highlight the insights from Xue et al.’s NDSS ’25 paper *The Discriminative Power of Cross-layer RTTs in Fingerprinting Proxy Traffic*:

> However, multiplexing might also introduce new types of fingerprints. For one, multiplexed flows tend to live longer and carry more packets compared to non-multiplexed flows. For example, the median number of request-response pairs in multiplexed flows is higher than over 97% of all flows observed from the ISP, which already makes them outliers and more conspicuous. But even in comparisons with this narrow 3%, multiplexed proxy flows could still be differentiated: since multiplexing interleaves packets from different streams, our estimation of application-layer RTT using correlation may not converge, adding a layer of variability which, paradoxically, might itself be fingerprintable. As shown in Figure 13 in Appendix, the sequence of estimated RTT diff from multiplexed proxy flows exhibit wider confidence intervals compared to that of ISP traffic.
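For anyone who hasn't read the paper, here is a toy sketch of my simplified understanding of the cross-layer idea (this is not the paper's actual estimator, and all the numbers are invented): compare a transport-layer RTT (e.g., from the TCP handshake) against application-layer RTTs (request-to-response timing). For direct traffic the two nearly coincide; for proxied traffic the application-layer RTT also includes the proxy-to-destination leg, and under multiplexing the request/response pairing becomes ambiguous, so the estimates turn noisy, which the quoted passage suggests may itself be a tell.

```go
package main

import (
	"fmt"
	"time"
)

// rttDiff subtracts the transport-layer RTT (SYN -> SYN/ACK) from each
// estimated application-layer RTT (request packet -> matching response).
// Small, stable diffs suggest direct traffic; large or high-variance
// diffs suggest an extra proxy hop and/or mispaired multiplexed streams.
func rttDiff(transportRTT time.Duration, appRTTs []time.Duration) []time.Duration {
	diffs := make([]time.Duration, len(appRTTs))
	for i, a := range appRTTs {
		diffs[i] = a - transportRTT
	}
	return diffs
}

func main() {
	// Invented measurements for illustration only.
	direct := rttDiff(40*time.Millisecond, []time.Duration{
		42 * time.Millisecond, 41 * time.Millisecond,
	})
	proxied := rttDiff(40*time.Millisecond, []time.Duration{
		95 * time.Millisecond, 130 * time.Millisecond, // noisy pairing
	})
	fmt.Println("direct diffs: ", direct)  // small and stable
	fmt.Println("proxied diffs:", proxied) // larger and more variable
}
```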
Overall, NaïveProxy remains an excellent and solid tool. Many thanks for your continued hard work and care in developing and maintaining these mitigations.
Quick clarification on my position: my earlier wording around ‘structural viability’ wasn’t ideal, and I’m only a couple of weeks into studying this topic, but the approach @RPRX seems to have emphasized in recent threads feels right to me: not underestimating the opponent’s capabilities, i.e., what they can or can’t do at time t due to non-technical factors like hardware cost, political agendas, social and economic consequences of blocking*, or regional fragmentation. If a protocol or tool has known identifiers or flaws that make it detectable, even if not currently blocked, it should be treated as fallible until it is patched or redesigned. Similarly, if it was blocked previously and hasn’t changed substantially since, it remains vulnerable. This doesn’t mean it’s unusable, only that it should be used with clear eyes and alongside diverse fallback options. We know, for instance, that Shadowsocks might sometimes pass under certain conditions (region, timing, political agenda, clean IPs), but that does not mean it is fundamentally robust.

*I do get that nowadays strategies often blend plausible cover with raising the cost and false-positive risk of broad blocking, but this is only possible if the design is sound.
Regarding diversity, I strongly value having multiple approaches available, as reflected by the openness and range of this issue’s discussion. I was genuinely surprised and a bit overwhelmed by the variety of existing protocols and tactics out there. This reinforces my desire to maintain several fallback options beyond my primary choices to improve resilience against evolving censor strategies.