
Issue with curl and "cannot parse client hello"

Open tarampampam opened this issue 1 year ago • 0 comments

Hi there, and thanks for your project!

However, I have run into a bizarre (to me) issue that can be reproduced simply:

$ docker run --rm nineseconds/mtg:2.1.6 generate-secret 127.0.0.2.nip.io
7q6BM33vVLydrR4EXDkfMAkxMjcuMC4wLjIubmlwLmlv
$ docker run --rm -p "443:443" nineseconds/mtg:2.1.6 simple-run -d -p 443 0.0.0.0:443 7q6BM33vVLydrR4EXDkfMAkxMjcuMC4wLjIubmlwLmlv
$ curl -k -v https://127.0.0.1:443/ # or https://127.0.0.2.nip.io/ - it does not matter
*   Trying 127.0.0.1:443...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* Operation timed out after 60000 milliseconds with 0 out of 0 bytes received
* Closing connection 0
curl: (28) Operation timed out after 60000 milliseconds with 0 out of 0 bytes received
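For reference, the secret generated above can be decoded to confirm what it contains. As I understand the format, it is base64 of a 0xee prefix, 16 random bytes, and then the domain-fronting hostname (this decoding is my own illustration, not part of mtg's CLI):

```python
# Decode the mtg "faked TLS" secret from the repro above.
# Assumed layout: 0xee prefix + 16 random bytes + fronting hostname.
import base64

secret = "7q6BM33vVLydrR4EXDkfMAkxMjcuMC4wLjIubmlwLmlv"
raw = base64.b64decode(secret + "=" * (-len(secret) % 4))  # re-pad if needed

prefix, key, host = raw[0], raw[1:17], raw[17:].decode()
print(hex(prefix), host)  # -> 0xee 127.0.0.2.nip.io
```

So the secret indeed embeds `127.0.0.2.nip.io` as the fronting domain, which matches the `generate-secret` invocation.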

In the docker logs:

{"level":"debug","configuration":{"debug":true,"allowFallbackOnUnknownDc":true,"secret":"7q6BM33vVLydrR4EXDkfMAkxMjcuMC4wLjIubmlwLmlv","bindTo":"0.0.0.0:443","preferIp":"prefer-ipv6","domainFrontingPort":443,"tolerateTimeSkewness":"","concurrency":8192,"defense":{"antiReplay":{"enabled":true,"maxSize":"1mib","errorRate":0},"blocklist":{"enabled":false,"downloadConcurrency":0,"urls":null,"updateEach":""},"allowlist":{"enabled":false,"downloadConcurrency":0,"urls":null,"updateEach":""}},"network":{"timeout":{"tcp":"10s","http":"10s","idle":"10s"},"dohIp":"9.9.9.9","proxies":null},"stats":{"statsd":{"enabled":false,"address":"","metricPrefix":"","tagFormat":""},"prometheus":{"enabled":false,"bindTo":"","httpPath":"","metricPrefix":""}}},"logger":"","timestamp":1660290772743,"message":"configuration"}
{"level":"info","logger":"allowlist.ipblocklist.firehol","timestamp":1660290772744,"message":"ip list was updated"}
{"level":"info","client-ip":"172.17.0.1","stream-id":"-Q7t0-z6u-e-Lrr-d6VMbw","logger":"proxy","timestamp":1660290775558,"message":"Stream has been started"}
{"level":"info","logger":"proxy","error":"bad digest","timestamp":1660290775563,"message":"cannot parse client hello"}
{"level":"info","client-ip":"127.0.0.1","stream-id":"rI0H4p-cb2EGM3WrRi5_TA","logger":"proxy","timestamp":1660290775808,"message":"Stream has been started"}
{"level":"info","logger":"proxy","error":"bad digest","timestamp":1660290775808,"message":"cannot parse client hello"}
{"level":"info","client-ip":"127.0.0.1","stream-id":"dDrYwVY1Rpb25cUgBWUdRA","logger":"proxy","timestamp":1660290775809,"message":"Stream has been started"}
{"level":"info","logger":"proxy","error":"bad digest","timestamp":1660290775809,"message":"cannot parse client hello"}
{"level":"info","client-ip":"127.0.0.1","stream-id":"Y2OVgRfJdeNdiSIoORpGQQ","logger":"proxy","timestamp":1660290775809,"message":"Stream has been started"}
{"level":"info","logger":"proxy","error":"bad digest","timestamp":1660290775809,"message":"cannot parse client hello"}
{"level":"info","client-ip":"127.0.0.1","stream-id":"uvg-P8uMKMJx6IRhV2d3Lw","logger":"proxy","timestamp":1660290775810,"message":"Stream has been started"}
{"level":"info","logger":"proxy","error":"bad digest","timestamp":1660290775810,"message":"cannot parse client hello"}
{"level":"info","client-ip":"127.0.0.1","stream-id":"S_mGV5f4msq3LRX-ftmdNQ","logger":"proxy","timestamp":1660290775810,"message":"Stream has been started"}
# ...repeated many times...
{"level":"info","ip":"127.0.0.1","logger":"proxy","timestamp":1660290776624,"message":"connection was concurrency limited"}
{"level":"debug","client-ip":"127.0.0.1","stream-id":"MXsjpwrEkGPn3_wnXFK7sQ","logger":"proxy.domain-fronting","timestamp":1660291418199,"message":"telegram -> client has been finished"}
{"level":"debug","client-ip":"127.0.0.1","stream-id":"u1NM2Jz-7WeDC8j-HirKNA","logger":"proxy.domain-fronting","timestamp":1660291418199,"message":"client -> telegram has been finished"}
{"level":"info","client-ip":"127.0.0.1","stream-id":"MXsjpwrEkGPn3_wnXFK7sQ","logger":"proxy","timestamp":1660291418199,"message":"Stream has been finished"}
{"level":"debug","client-ip":"127.0.0.1","stream-id":"u1NM2Jz-7WeDC8j-HirKNA","logger":"proxy.domain-fronting","timestamp":1660291418199,"message":"telegram -> client has been finished"}
{"level":"debug","client-ip":"127.0.0.1","stream-id":"sy4XmCL0VDzrc3zkPMEppA","logger":"proxy.domain-fronting","timestamp":1660291418199,"message":"client -> telegram has been finished"}
{"level":"info","client-ip":"127.0.0.1","stream-id":"u1NM2Jz-7WeDC8j-HirKNA","logger":"proxy","timestamp":1660291418199,"message":"Stream has been finished"}
# ...repeated many times...
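For context on the "bad digest" error: my understanding is that in the faked-TLS scheme used by MTProto proxies, the 32-byte "random" field of the Client Hello must carry an HMAC-SHA256 over the hello (computed with the random field zeroed, keyed by the shared secret). curl fills that field with genuinely random bytes, so the recomputed digest never matches. This is a conceptual sketch only, not mtg's actual code; the offset is illustrative and the real scheme also XORs a timestamp into the last 4 bytes, which I omit here:

```python
# Conceptual sketch of the faked-TLS digest check (not mtg's real code).
import hashlib
import hmac
import os

SECRET = os.urandom(16)  # stand-in for the 16-byte part of the proxy secret
RANDOM_OFFSET = 6        # illustrative offset of the hello's "random" field

def _zeroed(hello: bytes) -> bytes:
    """Return the hello with its 32-byte random field zeroed out."""
    return hello[:RANDOM_OFFSET] + b"\x00" * 32 + hello[RANDOM_OFFSET + 32:]

def sign_hello(hello: bytes) -> bytes:
    """Write a valid digest into the random field, as an MTProto client would."""
    digest = hmac.new(SECRET, _zeroed(hello), hashlib.sha256).digest()
    return hello[:RANDOM_OFFSET] + digest + hello[RANDOM_OFFSET + 32:]

def digest_ok(hello: bytes) -> bool:
    """Server-side check: recompute the digest and compare."""
    expected = hmac.new(SECRET, _zeroed(hello), hashlib.sha256).digest()
    return hmac.compare_digest(hello[RANDOM_OFFSET:RANDOM_OFFSET + 32], expected)

hello = os.urandom(128)            # stand-in Client Hello
assert digest_ok(sign_hello(hello))  # a real MTProto client passes
assert not digest_ok(hello)          # curl's random bytes fail -> "bad digest"
```

So the "bad digest" log line itself is expected for a plain curl request; the surprising part is the flood and timeout that follow instead of a clean rejection.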

It looks like some kind of loop. In my production environment, besides the log flooding (CPU and memory are heavily used during this time), the container orchestrator ends up killing the container running mtg. So anyone who knows a subdomain pointing at mtg can kill it with one simple HTTP (curl) request, or just by opening the domain in a browser.

At the same time, Telegram clients can communicate with mtg without any issues.

Expected behavior: a simple HTTP 404 "Page not found" response for invalid (non-MTProto) TLS connections, instead of the timeout and log flood.
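To make the expectation concrete, something like the following minimal rejection response is what I have in mind. This is purely hypothetical (the function name and exact behavior are mine, and mtg does nothing like this today); whether a plain-HTTP reply is appropriate mid-TLS-handshake is debatable, but the point is to answer once and close instead of looping:

```python
# Hypothetical fallback for streams that fail the digest check:
# answer with a minimal HTTP 404 and close the connection.
def reject_response(body: bytes = b"Page not found") -> bytes:
    """Build a minimal HTTP/1.1 404 to send before closing a bad stream."""
    return (
        b"HTTP/1.1 404 Not Found\r\n"
        b"Content-Type: text/plain\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"Connection: close\r\n"
        b"\r\n" + body
    )

print(reject_response().decode())
```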

tarampampam · Aug 12 '22 08:08