Clients cannot connect after upgrading server to 0.6.0
Was on server version 0.5.0 on Ubuntu 18. Used the upgrade script to upgrade to 0.6.0. No errors were reported. Now clients cannot connect to the server, citing:
client: Connection error: websocket: bad handshake (Attempt: 10)
I tried rolling back the upgrade by copying the 0.5.0 binary back to /usr/local/bin and restarting, but the server stops logging after:
client-listener: Listening on 0.0.0.0:80...
I shut down the service, moved the existing /usr/local/bin/rportd file and the /var/lib/rport folder to my home folder, extracted the contents of the backup file the update script made, and restored them to their original locations. I started the service, but it still hung at the same spot in the log. I diffed the rportd.conf and rportd.conf.save files and saw that the update had made some changes, so I restored the rportd.conf.save file and restarted the service. It still hung at the same spot in the logging. Netstat showed nothing listening on port 80, so I ran the `setcap CAP_NET_BIND_SERVICE=+eip /usr/local/bin/rportd` command I found in the upgrade script, and now the service starts.
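For anyone else who hits the same silent hang: copying a binary over /usr/local/bin/rportd apparently drops its file capability, so it can no longer bind port 80. Roughly, the check and fix look like this (same setcap command as in the upgrade script):

```bash
# Show the file capabilities currently set on the binary (empty output = none)
getcap /usr/local/bin/rportd

# Re-apply the capability so rportd can bind privileged ports like 80 without running as root
sudo setcap CAP_NET_BIND_SERVICE=+eip /usr/local/bin/rportd

sudo systemctl restart rportd
```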
@kevs-oc
Just to check what has caused the trouble: download the 0.6.0 binary again to a temp folder and execute `./rportd --version` and `ldd ./rportd`.
Ubuntu 18.04 might not fulfill the glibc dependencies of the rport 0.6.0 binary.
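Something along these lines (the temp folder path is just an example):

```bash
cd /tmp/rport-check          # hypothetical temp folder holding the downloaded 0.6.0 binary
./rportd --version           # should print the version string without errors
ldd ./rportd                 # any line saying "not found" points to a missing or too old library
ldd --version                # shows the glibc version installed on this host
```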
`/usr/local/bin/rportd-0.6.0 --version` returned "version 0.6.0" (I renamed the file to keep it as a reference).

    ldd rportd-0.6.0
        linux-vdso.so.1 (0x00007ffca4545000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f29dd56b000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f29dd548000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f29dd356000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f29dd57a000)
I rolled back to 0.5.0 and my old clients can connect; however, when I try to add a new client, it also gets the bad handshake error. systemctl status shows rport is active/running, and rport.log shows just the handshake errors with ever-increasing attempt intervals.
Are the new clients inside the same network as the old ones? The bad handshake error is often caused by transparent proxies or firewalls doing virus scanning. Is there a transparent proxy on the network? This is most likely not related to the server. Usually, you can connect 0.6.0 clients to older servers.
And I take it back about the OS. I'm actually running on Ubuntu 20.04.3. Sorry about that. One more thing I noticed on the server side. When rportd starts, it must be calling guacd because I see it starting as well. I confirmed that rportd.conf has no reference to it and that the exe I'm running is 0.5.0.
The 0.6.0 update has a dependency on guacd, which 0.5.0 does not. You can remove guacd with `apt purge` or `dpkg`.
But if you are on Ubuntu 20.04, 0.6.0 should run flawlessly. `ldd` confirmed all dependencies are met.
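For example (I don't have the exact package name at hand, so verify it first):

```bash
# Find the installed package that provides guacd (name assumed to contain "guac")
dpkg -l | grep -i guac

# Remove it with apt, e.g.:
sudo apt purge <package-name>
# or with dpkg directly:
sudo dpkg --purge <package-name>
```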
The new client is on the same network as an old client that is able to connect. This is a lab network I am testing these new devices on, and there is no transparent proxy running.
After upgrading, none of the existing clients could connect. The clients were a mix of 0.5.0 and 0.6.0.
After the upgrade, please check the following (a copy-and-paste version of these checks follows below):
- `systemctl status rportd`. Any errors?
- `netstat -tulpen`. Is rportd listening on the expected ports?
- `/var/log/rport/rportd.log`. Any errors?
- If rportd does not come up, check for config syntax errors by executing it in the foreground with `su - rport -s /bin/bash -c "rportd -c /etc/rport/rportd.conf"`.
- Increase the log level in `rportd.conf` to debug, restart rportd and inspect the logs.
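The same checks as a copy-and-paste block (service name and log path as used throughout this thread; adjust if yours differ):

```bash
systemctl status rportd                  # service state and recent errors
sudo netstat -tulpen | grep rportd       # is rportd listening on the expected ports?
tail -n 100 /var/log/rport/rportd.log    # recent server log entries

# If rportd does not come up at all, run it in the foreground as the rport user
# to surface config syntax errors directly:
su - rport -s /bin/bash -c "rportd -c /etc/rport/rportd.conf"
```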
Before I try the upgrade again, I would like to get back to where I was. I booted up some older devices I had on the shelf that already had the rport client installed, and they were able to connect with no issues. I've tried connecting two new clients on hardware identical to the previously configured one, and neither is able to connect; both cite the bad handshake error. I am positive I am not running a transparent proxy. I even tried http://whatismyip.network/detect-isp-proxy-tool/ to see if it could detect that I was behind a proxy, and it confirmed I am not.
Do you have any other suggestions on why the new clients are unable to connect?
I compared the rport.conf files from the working and non-working devices and found the server URLs to be different. The newly installed clients are trying to reach my server using "https://servername.domain.com" while the previously installed clients that can connect are using "servername.domain.com:80". I changed a newly installed client to the same format and restarted it. Now the client logs a connection error: client id "blah" is already in use. The id referenced does not match the client name/password. Any thoughts?
I had a similar issue and changed "https" to "http" in the client's rport.conf; with that workaround the clients can connect. Seems to be a protocol mismatch.
@otto404-114 I did that and was able to connect, but got the "client id already in use" error. The new conf file that comes down has a new param called use_system_id which defaults to true. Since I imaged these appliances from the same OS image, they all get the same machine id, which gets blocked by rportd. I set that param to false and enabled the id value in the conf file, as well as the new use_hostname param and name values. Now the new client can connect.
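For reference, the relevant part of the client's rport.conf now looks roughly like this (option names as discussed above; ids and names are placeholders, and whether use_hostname should be on or off depends on whether your cloned appliances get unique hostnames):

```
[client]
  use_system_id = false      # don't derive the client id from the OS machine id (identical on cloned images)
  id = "lab-appliance-02"    # placeholder: must be unique per appliance
  use_hostname = false       # assumption: use the explicit name below instead of the hostname
  name = "lab-appliance-02"  # placeholder
```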
@thorstenkramm I thought I had restored everything to 0.5.0 but did I miss the default conf file? Or does that come down from the provisioning server? If so, can I flag it to use the older version?
Yes. Clients connect over HTTP (without the S). Encryption happens on the application layer, not on the transport layer.
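So in the client's rport.conf the connect URL should look something like this (parameter name assumed; host and port as in the working clients above):

```
[client]
  server = "http://servername.domain.com:80"   # plain http on the wire; traffic is still encrypted at the application layer
```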
@kevs-oc
The backup executed before the update process contains the old server config. The update will introduce new config options the old version doesn't understand. If you roll back to 0.5.0, you must restore the rportd.conf.
There is no "default conf". It's all in the rportd.conf. Internal defaults are compiled into the binary.
I double-checked my backup file and it does not contain a backup of the rportd.conf file. According to the upgrade script:

```bash
# Create a backup
FOLDERS="/usr/local/bin/rportd /var/lib/rport /var/log/rport"
throw_info "Creating a backup of your RPort data. This can take a while."
throw_debug "${FOLDERS} will be backed up."
BACKUP_FILE=/var/backups/rportd-$(date +%Y%m%d-%H%M%S).tar.gz
require_pv
if is_available pv; then
    EST_SIZE=$(du -sb /var/lib/rport | awk '{print $1}')
    tar cf - $FOLDERS | pv -s $EST_SIZE | gzip > $BACKUP_FILE
else
    tar cvzf $BACKUP_FILE $FOLDERS
fi
throw_info "A backup has been created in $BACKUP_FILE"
```

That file is not backed up.
I may be confused, so please bear with me. Are you saying the conf file that the clients get is born from the rportd.conf file on the server? If so, I did not see a client section, nor many of the params that the client has but the server does not.
@kevs-oc Damn, you are right. The upgrade does not include the rportd.conf in the backup. I'll update the script right now.
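The fix will probably just be to add the config directory to the backup set, e.g. (variable name taken from the script excerpt above):

```bash
# Sketch only: also back up /etc/rport so rportd.conf is included
FOLDERS="/usr/local/bin/rportd /var/lib/rport /var/log/rport /etc/rport"
```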
The client configuration is not related to the server configuration. The client configuration is "born" from the sample configuration for a specific version.
Client credentials are the only client-specific data. They are stored in a separate file, typically /var/lib/rport/client-auth.json.
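If needed, you can inspect and back up that file directly, e.g.:

```bash
# Path as mentioned above; adjust if your installation differs
sudo cat /var/lib/rport/client-auth.json
sudo cp -a /var/lib/rport/client-auth.json ~/client-auth.json.bak
```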