testssl.sh
[BUG / possible BUG] openssl binary/binaries that default to :4433 timeout after b6b5a67
After b6b5a67, which removed the DNS timeouts for WSL, I see long scan times with the FreeBSD binary.
Upon investigation I discovered that openssl s_client for FreeBSD now falls back to its built-in default connect target, which is not :0 but :4433. This might happen on other platforms too.
Command line / docker command to reproduce
Under FreeBSD, using 3.1dev at 35ddd91, it takes around 12 minutes to complete a scan:
./testssl.sh www.verweg.com
###########################################################
testssl.sh 3.1dev from https://testssl.sh/dev/
(35ddd91 2021-12-21 10:54:58 -- )
This program is free software. Distribution and
modification under GPLv2 permitted.
USAGE w/o ANY WARRANTY. USE IT AT YOUR OWN RISK!
Please file bugs @ https://testssl.sh/bugs/
###########################################################
Using "OpenSSL 1.0.2-chacha (1.0.2k-dev)" [~183 ciphers]
on helium:./bin/openssl.FreeBSD.amd64
(built: "Jan 18 18:46:10 2019", platform: "BSD-x86_64")
...
Done 2022-01-04 17:19:53 [0679s] -->> 94.142.245.8:443 (www.verweg.com) <<--
Most time is spent in detecting openssl s_client options: no -connect is specified, so the command tries to connect to the default :4433.
Expected behavior
I've kind of fixed this in my fork by partially undoing b6b5a67, but using -connect 0:0 where applicable.
https://github.com/rvstaveren/testssl.sh/commit/d82085d35b7c6dc74e2742dd7bae47e985cfa32f
Maybe reverting b6b5a67 but substituting NXCONNECT=${NXCONNECT:-invalid.} with NXCONNECT=${NXCONNECT:-0:0} would be good too.
With this, under FreeBSD, a scan completes in the normal ~80 seconds:
./testssl.sh www.verweg.com
###########################################################
testssl.sh 3.1dev from https://testssl.sh/dev/
(d82085d 2022-01-01 12:20:21 -- )
This program is free software. Distribution and
modification under GPLv2 permitted.
USAGE w/o ANY WARRANTY. USE IT AT YOUR OWN RISK!
Please file bugs @ https://testssl.sh/bugs/
###########################################################
Using "OpenSSL 1.0.2-chacha (1.0.2k-dev)" [~183 ciphers]
on helium:./bin/openssl.FreeBSD.amd64
(built: "Jan 18 18:46:10 2019", platform: "BSD-x86_64")
...
Done 2022-01-04 17:06:52 [0074s] -->> 94.142.245.8:443 (www.verweg.com) <<--
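For illustration, the capability check the fix targets can be sketched like this (a minimal sketch, not the literal testssl.sh code; the -connect 0:0 target is the proposed substitute for letting s_client fall back to :4433):

```shell
#!/usr/bin/env bash
# Sketch of the testssl.sh-style capability check (assumption: OPENSSL
# points at the binary under test). Without an explicit -connect,
# s_client tries its built-in default of localhost:4433; if a firewall
# DROPs packets there, every probe waits for a full timeout. The
# invalid target 0:0 fails instantly, so only option parsing is tested.
OPENSSL=${OPENSSL:-openssl}

HAS_NO_SSL2=false
"$OPENSSL" s_client -no_ssl2 -connect 0:0 </dev/null 2>&1 \
    | grep -aiq "unknown option" || HAS_NO_SSL2=true
echo "HAS_NO_SSL2=$HAS_NO_SSL2"
```

Either outcome of the grep is fine; the point is that the check returns in milliseconds instead of waiting on a dropped connection.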
Your system (please complete the following information):
- OS: FreeBSD 13.0-RELEASE-p5
- Platform: FreeBSD 13.0-RELEASE-p4 amd64
- Version: testssl.sh 3.1dev from https://testssl.sh/dev/ (d82085d 2022-01-01 12:20:21 -- )
- Version if running from git repo: commit 35ddd918135821b8df80c5dcb15f75eea8e7b506
- OpenSSL: ./bin/openssl.FreeBSD.amd64
Additional context
Rebuilding openssl from /usr/ports/security/openssl-unsafe showed the same behaviour, and the 9.x statically linked binary works fine
- thx for reporting
- argh :-) -- I thought this was solved for all times :(
- first thoughts follow
Atm I do not get why this causes huge delays for you. In general: if it does open a connection, we're limited by what the OS does. If in your case (whether it's FreeBSD or your special setting) the OS does a DROP on any port, this is going to be difficult.
Then -- and not that it helps -- this can't happen when an option being tested for doesn't exist. So -tls1_3 won't open a connection with the supplied binary, but -ssl3 does.
Bottom line: maybe a connection to 0:0 works for you; it does for me too. But I'm not sure whether it does for everybody else.
Thanks!
Afaik neither bin/openssl.FreeBSD.amd64 nor /usr/local/openssl-unsafe/bin/openssl (from pkg install openssl-unsafe) supports -tls1_3, but let's say openssl is a complex program. I'm going to see what it does on various Linuxes and a Windows 10 VM…
Just for my information: do you know why your system apparently drops packets on port 4433?
Afaik neither bin/openssl.FreeBSD.amd64 nor /usr/local/openssl-unsafe/bin/openssl (from pkg install openssl-unsafe) supports -tls1_3
I guess both come from this repo. When you use the OS builtin, I am pretty sure it does.
Hi @rvstaveren, @drwetter,
I just tried to come up with a solution to this problem and instead discovered another problem. My thought was, if the checks are slow because packets are being dropped, why not use $OPENSSL s_server to start up a TLS server in the background and then have the checks connect to that server? As a test I started a TLS server that listened on port 4433 and then ran testssl.sh. The result was that testssl.sh froze on the first successful connection to the server. In my case that was line 19438:
$OPENSSL s_client -no_ssl2 2>&1 | grep -aiq "unknown option" || HAS_NO_SSL2=true
My guess is that this could be fixed by changing this (and similar lines) to:
$OPENSSL s_client -no_ssl2 < /dev/null 2>&1 | grep -aiq "unknown option" || HAS_NO_SSL2=true
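Why the redirect matters can be shown without openssl at all: once s_client has a working connection it starts reading stdin, and inside a pipeline an un-redirected stdin is the terminal, so the check blocks. A minimal stand-in, with cat playing the role of s_client:

```shell
#!/usr/bin/env bash
# `cat` stands in for s_client once it has a live connection and reads
# stdin. Redirected from /dev/null it sees EOF immediately and the
# pipeline finishes; without the redirect it would block forever
# waiting for terminal input.
lines=$(cat </dev/null | wc -l | tr -d '[:space:]')
echo "finished immediately; lines read: $lines"
```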
However, another way to get testssl.sh to freeze is to run it in parallel with nc -k -l 4433. In this case testssl.sh freezes on the first attempt to connect to 4433:
$OPENSSL s_client -ssl2 2>&1 | grep -aiq "unknown option" || HAS_SSL2=true
Adding < /dev/null to the command does nothing to fix this.
I don't know if my original idea of setting up a background TLS server with $OPENSSL s_server (along with adding < /dev/null to the test lines) would solve the various problems. However, even if it would work, I have no idea how to implement it. So, unfortunately, I may have just uncovered more problems and no solutions. :-(
Hi @dcooper16 ,
yes, I believe < /dev/null would help for problem no 2. Problem no 1: It's a tough architecture decision to start a server as any port we pick we might run into a collision at some time. Either the port is taken or e.g. a network or SELinux policy prevents that.
For #3 (netcat) as an educated guess: maybe it freezes because on the application layer "openssl s_client --> netcat" they don't speak the same protocol.
yes, I believe < /dev/null would help for problem no 2.
Okay, I can create a PR for this.
Problem no 1: It's a tough architecture decision to start a server as any port we pick we might run into a collision at some time. Either the port is taken or e.g. a network or SELinux policy prevents that.
Hard coding in a specific port value would not work, as you say. With OpenSSL 1.1.1 and later I can call $OPENSSL s_server with -accept 0 and it will choose a port and print out which port number it is using, but that's not a very portable solution. OpenSSL 1.0.2-chacha seems to accept -accept 0, but doesn't print out the port number it is using, and other versions of OpenSSL and LibreSSL just fail. As far as I can tell, there is no way in Bash to request a port number, which could then be safely provided to $OPENSSL s_server.
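One common, admittedly racy workaround (a sketch, not anything in testssl.sh) is to probe a port range from bash using its /dev/tcp pseudo-device: a port where the connect attempt fails is assumed free. This doesn't eliminate the collision window described above, it only narrows it, and the starting port 14433 is an arbitrary choice for the sketch:

```shell
#!/usr/bin/env bash
# Racy free-port probe in pure bash. Opening /dev/tcp/HOST/PORT is a
# bash feature that succeeds only if something is listening, so a
# failed open suggests the port is currently free. A race remains:
# another process may grab the port before s_server binds it.
find_free_port() {
    local p
    for p in $(seq 14433 14533); do
        if ! (exec 3<>"/dev/tcp/127.0.0.1/$p") 2>/dev/null; then
            printf '%s\n' "$p"
            return 0
        fi
    done
    return 1
}

port=$(find_free_port)
echo "candidate port: $port"
```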
For #3 (netcat) as an educated guess: maybe it freezes because on the application layer "openssl s_client --> netcat" they don't speak the same protocol.
Yes, it seems that netcat receives the ClientHello and acknowledges it (at the TCP layer), and that is where communication stops. I would guess that openssl s_client never sees the ack (since that is handled at a lower layer) so it is just waiting for a response (any response) to its ClientHello message. Perhaps if the server end was really an application running a different protocol, some error message would be sent in response (or the TCP connection would be closed) and that would allow openssl s_client to complete, even if the response seemed like garbage to a TLS client. So, it's probably highly unlikely that anyone running testssl.sh would have a process like this listening on port 4433, but it is another example of something that could potentially go wrong.
Okay, I can create a PR for this.
That's one step in the right direction and would be appreciated. I believe this would also help the 3.0 branch.
As far as I can tell, there is no way in Bash to request a port number, which could then be safely provided to $OPENSSL s_server
Yes, and there are other constraints too.
In general I believe the best approach is to start with a reasonable default port but make this configurable somehow. As a second step persist this info on the client in a config file. The latter was an idea which I had a longer while ago, mostly because if we do this smart enough it also can reduce the start up time.
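The "reasonable default but configurable" idea could look like this (the TESTSSL_SERVER_PORT variable name is made up for illustration, not an existing testssl.sh option):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: keep 4433 as the default port, but let the user
# override it via the environment, e.g. when local firewalling DROPs
# traffic to 4433 or the port is already taken.
SERVER_PORT=${TESTSSL_SERVER_PORT:-4433}
echo "would start \$OPENSSL s_server -accept $SERVER_PORT"
```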
Oh unfortunately that doesn’t solve the case on my side :(
➜ testssl.sh git:(3.1dev-rvs) time ./bin/openssl.FreeBSD.amd64 s_client -ssl3 </dev/null 2>&1 | grep -aiq "unknown option"
./bin/openssl.FreeBSD.amd64 s_client -ssl3 < /dev/null 2>&1
0.01s user 0.00s system 0% cpu 3312 Kb mem 3256 Kb max RSS 2:30.03 total
grep --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn,.idea,.tox} -aiq
0.00s user 0.00s system 0% cpu 296 Kb mem 2584 Kb max RSS 2:30.02 total
Doing a dummy connect lets it fall straight through…
➜ testssl.sh git:(3.1dev-rvs) time ./bin/openssl.FreeBSD.amd64 s_client -ssl3 -connect 0:0 </dev/null 2>&1 | grep -aiq "unknown option"
./bin/openssl.FreeBSD.amd64 s_client -ssl3 -connect 0:0 < /dev/null 2>&1
0.00s user 0.01s system 95% cpu 7343 Kb mem 4380 Kb max RSS 0.007 total
grep --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn,.idea,.tox} -aiq
0.00s user 0.00s system 14% cpu 0 Kb mem 0 Kb max RSS 0.007 total
This might also be a case of unintentional self-harm, in that I use the blackhole(4) sysctl to not send a RST on a non-existent port. This previously went unnoticed due to the NXCONNECT logic that was present.
Though I can imagine other people using similar hardening, it should not make testssl.sh more complex than it already is. Either the NXCONNECT logic can be restored, but with an "illegal" address/port combination, or just close the issue.
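For reference, the hardening mentioned is roughly this (a FreeBSD-only config fragment, shown for context and not meant to be executed here; blackhole(4) makes the kernel silently drop segments to closed ports instead of replying with a RST, which is what turns the :4433 probes into multi-minute timeouts):

```shell
# FreeBSD blackhole(4) hardening (config fragment, FreeBSD-only):
# drop TCP segments to closed ports instead of answering with a RST,
# and do the same for UDP instead of sending ICMP port unreachable.
sysctl net.inet.tcp.blackhole=2
sysctl net.inet.udp.blackhole=1
```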
Hi Ruben,
this was intended to solve only ONE problem, see above.
For the blackhole thingy. Okay, thanks. I guess it would not harm to make the port configurable with 4433 being the default. For now I'll leave this open.
Cheers, Dirk
After sleeping over this, I no longer want to pursue this as testssl.sh should not be bothered by the edge case of advanced firewalling on the system running the script. I’m ok with closing this.
For what it's worth, I'm observing strange delays in my testing too and this issue looks similar. My setup is the drwetter/testssl.sh:latest container:
- I have to use --openssl=/usr/bin/openssl, and that makes the scanning time jump up to 3 minutes
- without --openssl the scan finishes in < 10s (but I'm not able to use that because of "... appears to support TLS 1.3 ONLY. You better use --openssl=<path_to_openssl_supporting_TLS_1.3>", and I cannot bypass it without being interactive)