Callbacks for snowflake state
While integrating IPtProxy into onionmasq, I was missing a way to receive a callback once Snowflake is ready. This would help us avoid errors when starting onionmasq in unmanaged PT mode while Snowflake is not actually ready to be used. For RiseupVPN, I used a hacky workaround: parsing the logs written to the logfile while starting Snowflake to get informed about state changes. I would love to avoid replicating that while integrating IPtProxy into TorVPN.
@n8fr8 mentioned that you already provide a patch for the Snowflake proxy mode, and while I've read in another issue that you dislike maintaining patches, I wonder if there are good ways to get state feedback from IPtProxy.
@cohosh, before I start digging into the code without a clue:
- What's your opinion?
- Is there already something which would be easy to expose in IPtProxy?
- If not, would you be willing to provide such a mechanism?
My understanding is that ready here means that the Snowflake client has started listening for incoming SOCKS connections on the supplied port.
There are a few options here. It is possible to add a simple callback to the patch. Alternatively, we do have some Snowflake event listeners that can be used for this purpose. Using the events library would require just a slight modification to your Snowflake client patch, and I could provide an example of what that would look like.
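To illustrate the events approach: Snowflake delivers state changes to listeners that implement an event-receiver interface and register themselves with the client. The following is only a stdlib sketch of that observer pattern, with illustrative type and method names, not the exact Snowflake API:

```go
package main

import "fmt"

// SnowflakeEvent mirrors the idea behind Snowflake's event types: each
// state change is delivered as a value implementing a common interface.
type SnowflakeEvent interface {
	String() string
}

// EventConnected is an illustrative "client is connected" event.
type EventConnected struct{}

func (EventConnected) String() string { return "snowflake connected" }

// EventReceiver is the observer interface a caller implements to be
// notified of state changes.
type EventReceiver interface {
	OnNewSnowflakeEvent(e SnowflakeEvent)
}

// Dispatcher fans events out to all registered receivers.
type Dispatcher struct {
	receivers []EventReceiver
}

func (d *Dispatcher) AddListener(r EventReceiver) {
	d.receivers = append(d.receivers, r)
}

func (d *Dispatcher) Notify(e SnowflakeEvent) {
	for _, r := range d.receivers {
		r.OnNewSnowflakeEvent(e)
	}
}

// logReceiver is an example listener that records the last event seen.
type logReceiver struct{ last string }

func (l *logReceiver) OnNewSnowflakeEvent(e SnowflakeEvent) { l.last = e.String() }

func main() {
	d := &Dispatcher{}
	r := &logReceiver{}
	d.AddListener(r)
	d.Notify(EventConnected{})
	fmt.Println(r.last) // prints "snowflake connected"
}
```

In the real library, the registration call and event type names differ; the point is only that a listener registered before startup receives readiness notifications without log parsing.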
However, we're definitely getting to the point now where it would be easier to call the Snowflake client as a library rather than maintaining a patch. If you don't mind waiting a few days, I can give an example of what it would look like to remove the necessity of a patch for Snowflake (and Lyrebird). It would reduce the amount of code and I think it could be really nice to maintain.
Looking very much forward to the latter!
We should make progress on this soon. If @cyBerta just needs a simple status callback, can we provide that?
Here is the rewrite I put together quickly last week that cleans up a bunch of the functionality and will make it easier to add a callback: https://github.com/cohosh/IPtProxy/tree/library-refactor It's close to done, I just need to add back in proxy support and write some documentation. I'll open an MR for this by the end of the week with a full write up and explanation.
If you want a faster solution, you'll need to modify your patch files, because what you need a callback for is determining when you've started listening for incoming SOCKS connections which is already code in the patch. It can be quite simple to do this. This is what I'd recommend:
Add another input for the callback to the Start function:
```diff
+func Start(port *int, iceServersCommas, brokerURL, frontDomainsCommas, ampCacheURL, sqsQueueURL, sqsCredsStr, logFilename *string, logToStateDir, keepLocalAddresses, unsafeLogging *bool, max *int, onReady func()) {
```
And then call that function once you've called `pt.ListenSocks`:
```diff
@@ -270,11 +246,12 @@ func main() {
 	switch methodName {
 	case "snowflake":
 		// TODO: Be able to recover when SOCKS dies.
-		ln, err := pt.ListenSocks("tcp", "127.0.0.1:0")
+		ln, err := pt.ListenSocks("tcp", net.JoinHostPort("127.0.0.1", strconv.Itoa(*port)))
 		if err != nil {
 			pt.CmethodError(methodName, err.Error())
 			break
 		}
+		onReady()
```
I just realized that the bindings for gomobile might not support callback functions. In that case, I don't know what the fastest way to support this currently is.
Here is the promised IPtProxy refactor: https://github.com/tladesignz/IPtProxy/pull/61
It's a lot of API changes, so no rush in looking it over and I really would like your thoughts and feedback on it.
In the meantime, I've learned how to do callbacks with the gomobile bindings, and I'm happy to advise on a short term fix for this issue now that I've figured out how to do it properly. Just let me know if you want further input from me here :)
> I just realized that the bindings for gomobile might not support callback functions. In that case, I don't know what the fastest way to support this currently is.
I believe we have Go->JNI callback functions already in IPtProxy
> > I just realized that the bindings for gomobile might not support callback functions. In that case, I don't know what the fastest way to support this currently is.
>
> I believe we have Go->JNI callback functions already in IPtProxy
Yeah, after posting that I saw them in the proxy patch and figured out how to implement them: https://github.com/tladesignz/IPtProxy/blob/master/snowflake.patch#L112
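For reference, gomobile supports callbacks by letting the Java/Kotlin or Objective-C/Swift side implement an exported Go interface; gomobile generates a Java interface (or ObjC protocol) for it, and the app passes its implementation back into Go. A minimal sketch of the pattern follows; the names (`StateCallback`, `StartWithCallback`, the port value) are illustrative, not IPtProxy's actual API:

```go
package main

import "fmt"

// StateCallback is the kind of exported interface gomobile can bind:
// the mobile app implements it in Java/Kotlin or Swift and hands it
// to the Go side.
type StateCallback interface {
	OnReady(port int)
}

// StartWithCallback simulates a transport start that reports readiness
// through the callback instead of a log line. The port is hypothetical.
func StartWithCallback(cb StateCallback) {
	// ... set up the transport, open the SOCKS listener ...
	cb.OnReady(12345)
}

// printCallback stands in for what the mobile side would provide.
type printCallback struct{ port int }

func (p *printCallback) OnReady(port int) { p.port = port }

func main() {
	cb := &printCallback{}
	StartWithCallback(cb)
	fmt.Println("ready on port", cb.port) // prints "ready on port 12345"
}
```

gomobile restricts the types usable in such interface methods (basic types, strings, errors, and other bound types), so a callback like this stays within those limits.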
Ok. Thanks to @cohosh, we have a refined version of IPtProxy now, which removes the dependency on patches and has an improved interface:
https://github.com/tladesignz/IPtProxy/tree/refactor
The only thing which is still missing is the OnClientConnected callback in Snowflake Proxy, but @cohosh is already in the process of getting that into Snowflake: https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/merge_requests/424
As soon as that is available, I'll release an update to IPtProxy!
While this rewrite will not contain a callback for transport states, this should become unnecessary, as @cohosh explains:
> The way this refactor is currently written, `Start` won't return until `pt.ListenSocks` is called, which is what was causing the problems that prompted the above issue. Before, the Snowflake client `Start` was called in a goroutine, so the function could return before the SOCKS listener was ready to accept connections.
@cyBerta, please let me know if this is not sufficient for your needs.
@cohosh, please let me know when a new version of Snowflake with the OnClientConnected callback is available!
> While this rewrite will not contain a callback for transport states, this should become unnecessary, as @cohosh explains:
>
> > The way this refactor is currently written, `Start` won't return until `pt.ListenSocks` is called, which is what was causing the problems that prompted the above issue. Before, the Snowflake client `Start` was called in a goroutine, so the function could return before the SOCKS listener was ready to accept connections.
Yep! There is an exception here, which is that if there was a failure that caused an early return from `Start`, there will not be a listener. But this isn't a matter of waiting: if there is a failure to connect, it's because there was a fundamental problem with starting the transport. This can also be checked with a call to `Port()`, which will return 0 if the transport was not successfully started. This is where returning an error or error code (from either `Start` or `Port`) might be useful to callers.
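A minimal sketch of that contract, under the assumption of a `Start`/`Port` pair like the refactor's (simplified, not IPtProxy's actual code): `Start` blocks until the listener is up and returns an error on failure, and `Port` reports 0 until a successful start.

```go
package main

import (
	"fmt"
	"net"
)

// transport holds the listener state; Port returns 0 until Start succeeds.
type transport struct {
	ln net.Listener
}

// Start blocks until the SOCKS-style listener is accepting connections,
// returning an error instead of failing silently in a goroutine.
func (t *transport) Start(addr string) error {
	ln, err := net.Listen("tcp", addr)
	if err != nil {
		return fmt.Errorf("transport failed to start: %w", err)
	}
	t.ln = ln
	return nil
}

// Port returns the bound port, or 0 if the transport was not started.
func (t *transport) Port() int {
	if t.ln == nil {
		return 0
	}
	return t.ln.Addr().(*net.TCPAddr).Port
}

func main() {
	tr := &transport{}
	fmt.Println(tr.Port()) // prints 0 before Start
	if err := tr.Start("127.0.0.1:0"); err != nil {
		panic(err)
	}
	fmt.Println(tr.Port() != 0) // prints true after a successful Start
}
```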
Ahaha. You reminded me of a TODO there... :-)
Ok, I added returning errors to Start.
That's actually handled pretty well by gomobile: in the JVM, it's translated to a `throw`; in Objective-C, it's a `BOOL` return value plus an optional error pointer argument; and in Swift, it's translated to a `throw` as well. Nice!
https://github.com/tladesignz/IPtProxy/commit/22496af173500675784d82fb552c47a2944d272f
I also fixed some crashes: https://github.com/tladesignz/IPtProxy/commit/c8c7e67720d24fc2555330271484c068cb5838e6
So, if `Start` returns without an error, there's a pretty good chance that the transport has started.
Of course there can be further errors, but those happen in goroutines, so they aren't easily reportable except via the log file.
I wonder if it would make sense to report them in some callback? 🤔
This looks great! I could see the case for something like an `OnTransportStopped` callback that would execute when the copy loop ends and the transport is no longer listening for incoming SOCKS connections.
@cohosh, I see you released Snowflake 2.10.0, which contains the Snowflake Proxy `EventOnProxyClientConnected`. This was the last building block I needed to release a new version of IPtProxy.
However, I'm completely confused on how to use it. Can you clarify?
> @cohosh, I see you released Snowflake 2.10.0, which contains the Snowflake Proxy `EventOnProxyClientConnected`. This was the last building block I needed to release a new version of IPtProxy.
>
> However, I'm completely confused on how to use it. Can you clarify?
Yes, this should be the last piece :) I'm working on a commit to show how to use it now. Should I open a PR to the refactor branch?
Here it is: https://github.com/tladesignz/IPtProxy/pull/63
Oh, my. I'm sorry, I just came to the same conclusion... :-/ Sorry for making you do double work. At least it shows me that I was on the right path: https://github.com/tladesignz/IPtProxy/commit/5847df1da0c68d0bef124a7645cc626a436156a3
Can you have a look at this change: https://github.com/tladesignz/IPtProxy/commit/8a706ac0b679ec94bcb5ca69d9078a9165e937ef
If you don't have any objections, I think we're good to release!
Improved the check as per your suggested solution: https://github.com/tladesignz/IPtProxy/commit/8a191d4c75f977004599a8afa93fad8143f7a960
Guess I was a little too optimistic:
Tried it out with Onion Browser.
This is what happens with webtunnel bridges:
```
DIR} Delaying directory fetches: No running bridges
LOG SEVERITY=error MESSAGE="Error parsing args: "
Nov 12 17:17:17.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 ID=<none> RSA_ID=F44A64FCD676CE7845A88DE19B24EBCF0863AC34 ("general SOCKS server failure")
Nov 12 17:17:17.000 [notice] {CONFIG} Bridge 'GordonFreeman' has both an IPv4 and an IPv6 address. Will prefer using its IPv6 address ([2001:db8:2a61:dea:a9e1:2127:6f2c:5680]:443) based on the configured Bridge address.
Nov 12 17:17:17.000 [notice] {DIR} new bridge descriptor 'GordonFreeman' (fresh): $76CAF380CD98B4640E2F2F4DD09BB8348D271C31~GordonFreeman [a8o87HqP5jliDUGhtzLruBuAAu+T1+eu78yPq4cEHtg] at 167.172.166.227 and [2001:db8:2a61:dea:a9e1:2127:6f2c:5680]
Nov 12 17:17:18.000 [notice] {CIRC} Failed to find node for hop #1 of our path. Discarding this circuit.
Nov 12 17:17:18.000 [notice] {CIRC} Failed to find node for hop #1 of our path. Discarding this circuit.
Nov 12 17:17:18.000 [notice] {CIRC} Failed to find node for hop #1 of our path. Discarding this circuit.
Nov 12 17:17:18.000 [notice] {CIRC} Failed to find node for hop #1 of our path. Discarding this circuit.
Nov 12 17:17:18.000 [notice] {CIRC} Our circuit 0 (id: 247) died due to an invalid selected path, purpose Unlinked conflux circuit. This may be a torrc configuration issue, or a bug.
LOG SEVERITY=error MESSAGE="Error parsing args: "
Nov 12 17:17:18.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 ID=<none> RSA_ID=F44A64FCD676CE7845A88DE19B24EBCF0863AC34 ("general SOCKS server failure")
Nov 12 17:17:19.000 [notice] {CIRC} Failed to find node for hop #1 of our path. Discarding this circuit.
Nov 12 17:17:19.000 [notice] {CIRC} Failed to find node for hop #1 of our path. Discarding this circuit.
Nov 12 17:17:19.000 [notice] {CIRC} Failed to find node for hop #1 of our path. Discarding this circuit.
Nov 12 17:17:19.000 [notice] {CIRC} Failed to find node for hop #1 of our path. Discarding this circuit.
LOG SEVERITY=error MESSAGE="Error parsing args: "
Nov 12 17:17:19.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 ID=<none> RSA_ID=F44A64FCD676CE7845A88DE19B24EBCF0863AC34 ("general SOCKS server failure")
Nov 12 17:17:20.000 [notice] {CIRC} Failed to find node for hop #1 of our path. Discarding this circuit.
Nov 12 17:17:20.000 [notice] {CIRC} Failed to find node for hop #1 of our path. Discarding this circuit.
Nov 12 17:17:20.000 [notice] {CIRC} Failed to find node for hop #1 of our path. Discarding this circuit.
Nov 12 17:17:20.000 [notice] {CIRC} Failed to find node for hop #1 of our path. Discarding this circuit.
LOG SEVERITY=error MESSAGE="Error parsing args: "
Nov 12 17:17:20.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 ID=<none> RSA_ID=F44A64FCD676CE7845A88DE19B24EBCF0863AC34 ("general SOCKS server failure")
LOG SEVERITY=error MESSAGE="Error parsing args: "
Nov 12 17:17:21.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 ID=<none> RSA_ID=F44A64FCD676CE7845A88DE19B24EBCF0863AC34 ("general SOCKS server failure")
LOG SEVERITY=error MESSAGE="Error parsing args: "
Nov 12 17:17:22.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 ID=<none> RSA_ID=F44A64FCD676CE7845A88DE19B24EBCF0863AC34 ("general SOCKS server failure")
LOG SEVERITY=error MESSAGE="Error parsing args: "
Nov 12 17:17:23.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 ID=<none> RSA_ID=F44A64FCD676CE7845A88DE19B24EBCF0863AC34 ("general SOCKS server failure")
LOG SEVERITY=error MESSAGE="Error parsing args: "
Nov 12 17:17:24.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 ID=<none> RSA_ID=F44A64FCD676CE7845A88DE19B24EBCF0863AC34 ("general SOCKS server failure")
LOG SEVERITY=error MESSAGE="Error parsing args: "
Nov 12 17:17:26.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 ID=<none> RSA_ID=F44A64FCD676CE7845A88DE19B24EBCF0863AC34 ("general SOCKS server failure")
LOG SEVERITY=error MESSAGE="Error parsing args: "
Nov 12 17:17:28.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 ID=<none> RSA_ID=F44A64FCD676CE7845A88DE19B24EBCF0863AC34 ("general SOCKS server failure")
LOG SEVERITY=error MESSAGE="Error parsing args: "
Nov 12 17:17:30.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 ID=<none> RSA_ID=F44A64FCD676CE7845A88DE19B24EBCF0863AC34 ("general SOCKS server failure")
LOG SEVERITY=error MESSAGE="Error parsing args: "
Nov 12 17:17:33.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 ID=<none> RSA_ID=F44A64FCD676CE7845A88DE19B24EBCF0863AC34 ("general SOCKS server failure")
LOG SEVERITY=error MESSAGE="Error parsing args: "
Nov 12 17:17:41.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 ID=<none> RSA_ID=F44A64FCD676CE7845A88DE19B24EBCF0863AC34 ("general SOCKS server failure")
LOG SEVERITY=error MESSAGE="Error parsing args: "
Nov 12 17:17:59.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 ID=<none> RSA_ID=F44A64FCD676CE7845A88DE19B24EBCF0863AC34 ("general SOCKS server failure")
```
This happens with Snowflake:
```
Nov 12 17:21:33.000 [notice] {GUARD} Switching to guard context "default" (was using "bridges")
Nov 12 17:21:33.000 [notice] {GUARD} Switching to guard context "bridges" (was using "default")
Nov 12 17:21:33.000 [notice] {DIR} Delaying directory fetches: No running bridges
Nov 12 17:21:33.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with 192.0.2.3:80 ID=<none> RSA_ID=2B280B23E1107BB62ABFC40DDCC8824814F80A72 ("general SOCKS server failure")
Nov 12 17:21:33.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with 192.0.2.4:80 ID=<none> RSA_ID=8838024498816A039FCBBAB14E6F40A0843051FA ("general SOCKS server failure")
Nov 12 17:21:34.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with 192.0.2.3:80 ID=<none> RSA_ID=2B280B23E1107BB62ABFC40DDCC8824814F80A72 ("general SOCKS server failure")
Nov 12 17:21:34.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with 192.0.2.4:80 ID=<none> RSA_ID=8838024498816A039FCBBAB14E6F40A0843051FA ("general SOCKS server failure")
Nov 12 17:21:35.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with 192.0.2.3:80 ID=<none> RSA_ID=2B280B23E1107BB62ABFC40DDCC8824814F80A72 ("general SOCKS server failure")
Nov 12 17:21:35.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with 192.0.2.4:80 ID=<none> RSA_ID=8838024498816A039FCBBAB14E6F40A0843051FA ("general SOCKS server failure")
Nov 12 17:21:37.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with 192.0.2.3:80 ID=<none> RSA_ID=2B280B23E1107BB62ABFC40DDCC8824814F80A72 ("general SOCKS server failure")
Nov 12 17:21:37.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with 192.0.2.4:80 ID=<none> RSA_ID=8838024498816A039FCBBAB14E6F40A0843051FA ("general SOCKS server failure")
Nov 12 17:21:38.000 [warn] {NET} Proxy Client: unable to connect OR connection (handshaking (proxy)) with 192.0.2.3:80 ID=<none> RSA_ID=2B280B23E1107BB62ABFC40DDCC8824814F80A72 ("general SOCKS server failure")
```
Obfs4 and Meek seem to work. Way to go.
> Improved the check as per your suggested solution: 8a191d4
This looks great! Nice catch on the nil check, I forgot to do that.
> This is what happens with webtunnel bridges:
Hmm, can you share the webtunnel bridge line? It shows an arg processing error which is referring to the SOCKS args.
> Hmm, can you share the webtunnel bridge line? It shows an arg processing error which is referring to the SOCKS args.
I got them from https://bridges.torproject.org/bridges?transport=webtunnel
```
webtunnel [2001:db8:8b15:5361:79c0:7bbf:dd0e:5fc3]:443 F44A64FCD676CE7845A88DE19B24EBCF0863AC34 url=https://nextsphere.space/liiloz3ol4Mah3GohGae ver=0.0.1
webtunnel [2001:db8:2a61:dea:a9e1:2127:6f2c:5680]:443 76CAF380CD98B4640E2F2F4DD09BB8348D271C31 url=https://headhumping.jumpingcrab.com/teqzL2Muf7pnzw2qhQ3d ver=0.0.1
```
The `ipt.log` shows this:
```
2024/11/18 11:00:58 [NOTICE]: Launched transport: webtunnel
2024/11/18 11:00:59 Lookup error for host nextsphere.space: lookup nextsphere.space: no such host
2024/11/18 11:00:59 [ERROR]: Error parsing PT args:
2024/11/18 11:00:59 Lookup error for host nextsphere.space: lookup nextsphere.space: no such host
2024/11/18 11:00:59 [ERROR]: Error parsing PT args:
2024/11/18 11:01:00 Lookup error for host nextsphere.space: lookup nextsphere.space: no such host
2024/11/18 11:01:00 [ERROR]: Error parsing PT args:
2024/11/18 11:01:01 Lookup error for host nextsphere.space: lookup nextsphere.space: no such host
2024/11/18 11:01:01 [ERROR]: Error parsing PT args:
2024/11/18 11:01:02 Lookup error for host nextsphere.space: lookup nextsphere.space: no such host
2024/11/18 11:01:02 [ERROR]: Error parsing PT args:
2024/11/18 11:01:03 Lookup error for host nextsphere.space: lookup nextsphere.space: no such host
2024/11/18 11:01:03 [ERROR]: Error parsing PT args:
```
Aha. I could fix the Snowflake issue with 61fff98daf3eae6192d014414458178c8726d034:
```
2024/11/18 11:07:07 [NOTICE]: Launched transport: snowflake
2024/11/18 11:07:08 [ERROR]: Error parsing PT args: Invalid SOCKS arg: max=
```
Aha. Got new Webtunnel bridges:
```
webtunnel [2001:db8:c6df:6352:42d4:24f5:c473:ce20]:443 8B4D2E25BCD1B2314DE90066B1A76D06EC27CC8A url=https://syncvault.space/goochieph7ohW3cheiGe ver=0.0.1
webtunnel [2001:db8:ed78:c452:ced0:4bde:3f97:7936]:443 12CDCBAD16B1526D63D37F61201FCE83BDCA8B75 url=https://front-eu1.webdeliveryonl.click/7ScDHGDaBsqHtxB1B9GJ2sB7 ver=0.0.1
```
The latter one created this problem:
```
2024/11/18 11:21:06 [NOTICE]: Launched transport: webtunnel
2024/11/18 11:21:07 [ERROR]: Error dialing PT: tls: failed to verify certificate: x509: "webdeliveryonl.click" certificate is not trusted
2024/11/18 11:21:08 [ERROR]: Error dialing PT: tls: failed to verify certificate: x509: "webdeliveryonl.click" certificate is not trusted
2024/11/18 11:21:09 [ERROR]: Error dialing PT: tls: failed to verify certificate: x509: "webdeliveryonl.click" certificate is not trusted
2024/11/18 11:21:10 [ERROR]: Error dialing PT: tls: failed to verify certificate: x509: "webdeliveryonl.click" certificate is not trusted
```
When I removed that, it suddenly worked.
Seems Webtunnel is still very experimental and the bridges are unstable.
Very weird. On first start, it seems another code path is followed. I couldn't figure out how that happens, though.
On second start, `addExtraArgs` is finally called, and that didn't work too well yet.
I fixed it: 96b0233e3ad38b06c4f53038e48a8ec9de317e6b
Then suddenly I see a lot of Snowflake output in `ipt.log`:
```
--- Starting Snowflake Client ---
2024/11/18 12:40:08 Using ICE servers:
2024/11/18 12:40:08 url: stun:stun.voipgate.com:3478
2024/11/18 12:40:08 url: stun:stun.dus.net:3478
2024/11/18 12:40:08 url: stun:stun.sonetel.net:3478
2024/11/18 12:40:08 url: stun:stun.l.google.com:19302
2024/11/18 12:40:08 url: stun:stun.voys.nl:3478
2024/11/18 12:40:08 Rendezvous using Broker at: https://1098762253.rsc.cdn77.org/
2024/11/18 12:40:08 Domain fronting using a randomly selected domain from: [github.githubassets.com www.phpmyadmin.net www.cdn77.com]
2024/11/18 12:40:08 Using ICE servers:
2024/11/18 12:40:08 url: stun:stun.bluesip.net:3478
2024/11/18 12:40:08 url: stun:stun.sonetel.com:3478
2024/11/18 12:40:08 url: stun:stun.voipgate.com:3478
2024/11/18 12:40:08 url: stun:stun.voys.nl:3478
2024/11/18 12:40:08 url: stun:stun.uls.co.za:3478
2024/11/18 12:40:08 Rendezvous using Broker at: https://1098762253.rsc.cdn77.org/
2024/11/18 12:40:08 Domain fronting using a randomly selected domain from: [github.githubassets.com www.phpmyadmin.net www.cdn77.com]
2024/11/18 12:40:08 ---- SnowflakeConn: begin collecting snowflakes ---
2024/11/18 12:40:08 ---- SnowflakeConn: starting a new session ---
2024/11/18 12:40:08 ---- SnowflakeConn: begin collecting snowflakes ---
2024/11/18 12:40:08 ---- SnowflakeConn: starting a new session ---
2024/11/18 12:40:08 redialing on same connection
2024/11/18 12:40:08 WebRTC: Collecting a new Snowflake. Currently at [0/1]
2024/11/18 12:40:08 ---- SnowflakeConn: begin stream 3 ---
2024/11/18 12:40:08 WebRTC: Collecting a new Snowflake. Currently at [0/1]
2024/11/18 12:40:08 snowflake-0e5457c7d8456caa connecting...
2024/11/18 12:40:08 redialing on same connection
2024/11/18 12:40:08 snowflake-983390c9156469d2 connecting...
2024/11/18 12:40:08 ---- SnowflakeConn: begin stream 3 ---
2024/11/18 12:40:08 WebRTC: DataChannel created
2024/11/18 12:40:08 WebRTC: DataChannel created
2024/11/18 12:40:08 WebRTC: Created offer
2024/11/18 12:40:08 WebRTC: Created offer
2024/11/18 12:40:08 WebRTC: Set local description
2024/11/18 12:40:08 WebRTC: Set local description
2024/11/18 12:40:08 NAT Type: restricted
```
And I get a crash here: https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/lyrebird/-/blob/main/transports/snowflake/snowflake.go?ref_type=heads#L24
Any ideas, @cohosh?
Ok. The diverged execution path on start was a (really ugly) bug in Onion Browser. Now that I'm finally actually using Snowflake, I could fix the crash shown above with this: https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/lyrebird/-/merge_requests/68
@cohosh, can you merge my MR or do something equivalent and release again?
Then I'll finally release a new version of IPtProxy and @cyBerta can have what they need.
Okay cool, I opened a slightly different fix for the snowflake issue in lyrebird. We'll get that merged and have a new release in the next few days. Thanks for tracking that down.
Good to see that you got some of the webtunnel bridge lines working. I wouldn't say it's experimental, but it might be the case that some of the bridges are not properly configured. I've opened an issue for our bridge distributor to try and weed out faulty bridge lines: https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/issues/248
> @cohosh, can you merge my MR or do something equivalent and release again?
This has been merged :) Do you need a release, or is having it merged enough?