
Portfwd over TLS does not work properly

Open a3sroot opened this issue 2 years ago • 6 comments

Describe the bug: port forwarding (portfwd) to an HTTPS server that negotiates HTTP/2 intermittently fails with a TLS "bad record mac" alert.

To Reproduce: see implant/sliver/handlers/tunnel_handlers/portfwd_handler.go:66. When using the portfwd function against an HTTPS site that supports HTTP/2, communication fails with a certain probability:

➜  ~ curl https://127.0.0.1:8081 -H "host: xxx.com" -vv -k
*   Trying 127.0.0.1:8081...
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*  CAfile: /etc/ssl/cert.pem
*  CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: C=US; ST=xx; O=xxx.com Co.,Ltd; CN=*.xxx.com
*  issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=Secure Site CA G2
*  SSL certificate verify ok.
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle )
> GET / HTTP/2
> Host: xxx.com
> user-agent: curl/7.79.1
> accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
* TLSv1.2 (IN), TLS alert, bad record mac (532):
* LibreSSL SSL_read: error:1404C3FC:SSL routines:ST_OK:sslv3 alert bad record mac, errno 0
* Failed receiving HTTP2 data
* stopped the pause stream!
* Connection #0 to host 127.0.0.1 left intact

Expected behavior

The HTTP request completes normally.


a3sroot avatar Nov 18 '22 09:11 a3sroot

Implant log

2022/11/18 17:30:35 sliver.go:282: [recv] tunHandler 84
2022/11/18 17:30:35 portfwd_handler.go:43: [portfwd] Dialing -> xxx.com:443
2022/11/18 17:30:40 portfwd_handler.go:64: [portfwd] Configuring keep alive
2022/11/18 17:30:40 portfwd_handler.go:73: [portfwd] Creating tcp tunnel
2022/11/18 17:30:40 sliver.go:282: [recv] tunHandler 22
2022/11/18 17:30:40 data_handler.go:27: [tunnel] Cache tunnel 5236470694624572672 (seq: 0)
2022/11/18 17:30:40 data_handler.go:39: [tunnel] Write 301 bytes to tunnel 5236470694624572672 (read seq: 0)
2022/11/18 17:30:40 data_handler.go:49: [message just received] 5236470694624572672
2022/11/18 17:30:40 tunnel_writer.go:34: [tunnelWriter] tunnel 5236470694624572672 Write 4096 bytes (write seq: 0) ack: 1, data: 4111
2022/11/18 17:30:40 tunnel_writer.go:34: [tunnelWriter] tunnel 5236470694624572672 Write 1327 bytes (write seq: 1) ack: 1, data: 1344
2022/11/18 17:30:40 sliver.go:282: [recv] tunHandler 22
2022/11/18 17:30:40 data_handler.go:27: [tunnel] Cache tunnel 5236470694624572672 (seq: 1)
2022/11/18 17:30:40 data_handler.go:39: [tunnel] Write 93 bytes to tunnel 5236470694624572672 (read seq: 1)
2022/11/18 17:30:40 data_handler.go:49: [message just received] 5236470694624572672
2022/11/18 17:30:40 tunnel_writer.go:34: [tunnelWriter] tunnel 5236470694624572672 Write 120 bytes (write seq: 2) ack: 2, data: 136
2022/11/18 17:30:40 sliver.go:282: [recv] tunHandler 22
2022/11/18 17:30:40 sliver.go:282: [recv] tunHandler 22
2022/11/18 17:30:40 data_handler.go:27: [tunnel] Cache tunnel 5236470694624572672 (seq: 2)
2022/11/18 17:30:40 data_handler.go:39: [tunnel] Write 151 bytes to tunnel 5236470694624572672 (read seq: 2)
2022/11/18 17:30:40 data_handler.go:27: [tunnel] Cache tunnel 5236470694624572672 (seq: 3)
2022/11/18 17:30:40 data_handler.go:49: [message just received] 5236470694624572672
2022/11/18 17:30:40 data_handler.go:39: [tunnel] Write 110 bytes to tunnel 5236470694624572672 (read seq: 3)
2022/11/18 17:30:40 data_handler.go:39: [tunnel] Write 110 bytes to tunnel 5236470694624572672 (read seq: 3)
2022/11/18 17:30:40 data_handler.go:49: [message just received] 5236470694624572672
2022/11/18 17:30:40 data_handler.go:49: [message just received] 5236470694624572672
2022/11/18 17:30:40 tunnel_writer.go:34: [tunnelWriter] tunnel 5236470694624572672 Write 69 bytes (write seq: 3) ack: 5, data: 85
2022/11/18 17:30:40 portfwd_handler.go:136: [tunnel] Tunnel done, wrote 5612 bytes
2022/11/18 17:30:40 portfwd_handler.go:98: [portfwd] Closing tunnel 5236470694624572672 (%!s(<nil>))
2022/11/18 17:30:41 sliver.go:282: [recv] tunHandler 22
2022/11/18 17:30:41 sliver.go:282: [recv] tunHandler 22
2022/11/18 17:30:41 data_handler.go:77: [tunnel] Received data for nil tunnel 5236470694624572672
2022/11/18 17:30:41 data_handler.go:77: [tunnel] Received data for nil tunnel 5236470694624572672
2022/11/18 17:30:41 data_handler.go:78: [message just transfered] Data:"\x15\x03\x03\x00\x1a\x00\x00\x00\x00\x00\x00\x00\x07FRr\x13A\x12\xe2\x92f\xb4\x8a(\x03F.\x9a|K" Sequence:5 TunnelID:5236470694624572672 SessionID:"6610159e-86da-411d-b624-273da427919b"
2022/11/18 17:30:41 data_handler.go:78: [message just transfered] Data:"\x17\x03\x03\x00%\x00\x00\x00\x00\x00\x00\x00\x06E\xb4\xbe#2\xc0v=\x0bՔ\x88\x83M8\x81m=\xe8\xd8\xe2\xdb\xe7\x0b\xc6©\x89o" Sequence:4 TunnelID:5236470694624572672 SessionID:"6610159e-86da-411d-b624-273da427919b"
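The log above shows a write for Sequence:5 being handled before Sequence:4, yet TLS records must be delivered to the socket strictly in order or the per-record MAC check fails, which is exactly the "bad record mac" alert curl reports. Below is a minimal sketch of buffering tunnel messages by sequence number and flushing them only when consecutive; the names (`message`, `orderedWriter`) are illustrative assumptions, not Sliver's actual types:

```go
package main

import "fmt"

// message mimics a tunnel data message carrying a sequence number.
// (Illustrative type, not Sliver's actual definition.)
type message struct {
	Seq  uint64
	Data []byte
}

// orderedWriter buffers out-of-order messages and releases them
// strictly by sequence number, which a TLS byte stream requires.
type orderedWriter struct {
	nextSeq uint64
	cache   map[uint64]message
	out     []byte // stands in for the local TCP connection
}

func newOrderedWriter() *orderedWriter {
	return &orderedWriter{cache: map[uint64]message{}}
}

// Push caches a message, then flushes every consecutive sequence
// that is now available starting from nextSeq.
func (w *orderedWriter) Push(m message) {
	w.cache[m.Seq] = m
	for {
		next, ok := w.cache[w.nextSeq]
		if !ok {
			return
		}
		delete(w.cache, w.nextSeq)
		w.out = append(w.out, next.Data...)
		w.nextSeq++
	}
}

func main() {
	w := newOrderedWriter()
	// Simulate the log: a later sequence arrives first.
	w.Push(message{Seq: 1, Data: []byte("record1")})
	w.Push(message{Seq: 0, Data: []byte("record0")})
	w.Push(message{Seq: 2, Data: []byte("record2")})
	fmt.Println(string(w.out)) // record0record1record2
}
```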

a3sroot avatar Nov 18 '22 09:11 a3sroot

There is a bug in the portfwd logic: tunnel messages accumulate and are never sent out. It triggers roughly 50 percent of the time.
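The log also shows the same "Write 110 bytes (read seq: 3)" line twice, suggesting a message can be delivered to the connection more than once. A hedged sketch (hypothetical `dedupWriter` type, not Sliver's actual code) of guarding the write path against such re-delivery:

```go
package main

import "fmt"

// dedupWriter drops any message whose sequence number has already
// been written, so a re-delivered tunnel message cannot inject
// duplicate bytes into the TLS stream. (Illustrative only.)
type dedupWriter struct {
	seen map[uint64]bool
	out  []byte
}

func newDedupWriter() *dedupWriter {
	return &dedupWriter{seen: map[uint64]bool{}}
}

// Write appends data once per sequence number and reports whether
// the message was actually written.
func (w *dedupWriter) Write(seq uint64, data []byte) bool {
	if w.seen[seq] {
		return false // duplicate delivery: already written
	}
	w.seen[seq] = true
	w.out = append(w.out, data...)
	return true
}

func main() {
	w := newDedupWriter()
	w.Write(3, []byte("x"))
	dup := w.Write(3, []byte("x")) // re-delivery, as seen in the log
	fmt.Println(dup, len(w.out))   // false 1
}
```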

a3sroot avatar Nov 18 '22 09:11 a3sroot

@moloch-- Is there any workaround? This has been troubling me a lot.

a3sroot avatar Nov 19 '22 05:11 a3sroot

We'll try to incorporate a refactor in v1.6.x, but for now portfwd is not 100% reliable.

moloch-- avatar Nov 19 '22 20:11 moloch--

What is the rough schedule for v1.6.x? I will share my WebSocket communication approach next week.

a3sroot avatar Nov 20 '22 09:11 a3sroot

It feels like a communication problem at the TLS layer.

a3sroot avatar Nov 21 '22 07:11 a3sroot