Bug: qBittorrent stops listening to the open port after the gluetun VPN restarts internally
Is this urgent?
No
Host OS
Ubuntu 22.04
CPU arch
x86_64
VPN service provider
Custom
What are you using to run the container
docker-compose
What is the version of Gluetun
Running version latest built on 2022-12-31T17:50:58.654Z (commit ea40b84)
What's the problem 🤔
Everything works as expected when the qBittorrent and gluetun containers are freshly started: qBittorrent is listening on the open port and it is reachable via the internet. However, when gluetun runs for a longer period of time and the VPN stops working briefly, triggering gluetun's internal VPN restart, the open port in qBittorrent is no longer reachable.
What I found out is that by changing the open listening port in the qBittorrent WebUI settings to some random port, saving the configuration and then immediately reverting the change back to the original port, qBittorrent starts listening again and is once more reachable. Just restarting the qBittorrent container without changing anything also worked.
Is there anything gluetun can do to prevent this? Is this solely qBittorrent's bug? Unfortunately, I have no idea.
Thanks!
Share your logs
INFO [healthcheck] program has been unhealthy for 36s: restarting VPN
INFO [vpn] stopping
INFO [firewall] removing allowed port xxxxxx...
INFO [vpn] starting
INFO [firewall] allowing VPN connection...
INFO [wireguard] Using available kernelspace implementation
INFO [wireguard] Connecting to yyyyyyyyy:yyyyy
INFO [wireguard] Wireguard is up
INFO [firewall] setting allowed input port xxxxxx through interface tun0...
INFO [healthcheck] healthy!
Share your configuration
No response
Exactly the same is happening to me as well. The workaround @Gylesie mentioned works for me too, but unfortunately it is not ideal when one wants to rely on the Raspberry Pi just working without any manual input.
Maybe my docker-compose.yml will help with debugging/reproducing the error:
version: "3"
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=<redacted>
      - WIREGUARD_ADDRESSES=<redacted>
      - SERVER_CITIES=<redacted>
      - FIREWALL_VPN_INPUT_PORTS=<redacted> # mullvad forwarded port
      - PUID=1000
      - PGID=1000
    ports:
      - 8080:8080 # qbittorrent webgui
      - <redacted>:<redacted> # mullvad forwarded port
      - <redacted>:<redacted>/udp # mullvad forwarded port
    restart: unless-stopped
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    network_mode: "service:gluetun"
    environment:
      - PUID=1000
      - PGID=1000
      - WEBUI_PORT=8080
    volumes:
      - <redacted>:/config
      - <redacted>:/downloads
    depends_on:
      gluetun:
        condition: service_healthy
    restart: unless-stopped
EDIT: The same is happening with deluge too.
EDIT 2: Doesn't seem to happen with transmission.
Chiming in that I have the same issue with qbittorrent and gluetun with the hotio image for qbittorrent. @Gylesie's workaround is okay but troublesome when it happens at night.
It might be because there is a listener going through the tunnel, but gluetun destroys that tunnel on an internal vpn restart and re-creates it.
I had the same issue with the HTTP client fetching version info/public IP info from within gluetun, and the fix was to close 'idle connections' for the HTTP client when the tunnel is up again:
https://github.com/qdm12/private-internet-access-docker/blob/ab5dbdca9744defe3afbb68d5c0a029a29b0a6a0/internal/vpn/tunnelup.go#L20
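Roughly, the idea behind that fix looks like the following simplified Go sketch (not the exact gluetun code; the names here are made up for illustration):

// Minimal sketch (not gluetun's actual implementation): reset the HTTP
// client's pooled connections whenever the VPN tunnel is re-established.
package main

import (
	"net/http"
	"time"
)

// newVPNAwareClient returns an HTTP client plus a callback to invoke on every
// "tunnel up" event. The callback closes idle keep-alive connections so the
// next request dials a fresh socket through the new tun interface instead of
// reusing one bound to the tunnel that was just destroyed.
func newVPNAwareClient() (*http.Client, func()) {
	transport := &http.Transport{IdleConnTimeout: 90 * time.Second}
	client := &http.Client{Transport: transport, Timeout: 10 * time.Second}
	onTunnelUp := func() { transport.CloseIdleConnections() }
	return client, onTunnelUp
}

func main() {
	client, onTunnelUp := newVPNAwareClient()
	_ = client   // used for version/public-IP lookups elsewhere
	onTunnelUp() // call this after each VPN restart
}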
A bit weird though, since a server (listener) should still work across VPN restarts (it does work with e.g. the shadowsocks server). Also strange that it works with Transmission. But from what you said:
saving the configuration and then immediately after that reverting the change to the original port, it starts listening and it is now once again reachable
Doing this restarts the listener which is why it works again I would say.
I don't think I can really do something from within Gluetun; you could perhaps have some script reading the logs of Gluetun and restart qbittorrent when a VPN restart occurs. Not ideal, but I cannot think of anything better for now.
Hmm, that's unfortunate. Are you interested in implementing a way to define a custom script after the VPN gets restarted? That would be kinda useful in situations like this.
@qdm12 When the tunnel gets destroyed, does that mean that also the network interface gets destroyed and recreated afterwards?
Are you interested in implementing a way to define a custom script after the VPN gets restarted? That would be kinda useful in situations like this.
Yes and no, because this script would likely have to run on the host, outside the gluetun container. We could eventually, as an option, add capabilities for Gluetun to do Docker host operations by bind mounting the docker socket, but that's kinda risky security wise (although it already runs as root + NET_ADMIN capabilities, so maybe why not). Anyway, the backlog of more pressing issues is already thick, but let's keep this open; it would be interesting to explore this more.
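For reference, if the docker socket were bind mounted, the restart itself would only be a single call to the Docker Engine API. A rough sketch of what that could look like (again, this is not something Gluetun does today):

// Rough sketch only: with /var/run/docker.sock bind mounted, restarting a
// dependent container is a single POST to the Docker Engine API.
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

func restartContainer(name string) error {
	// HTTP client that talks to the Docker daemon over its unix socket.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}
	// The host part of the URL is ignored; the path is the Engine API endpoint.
	endpoint := fmt.Sprintf("http://docker/containers/%s/restart", name)
	resp, err := client.Post(endpoint, "application/json", nil)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusNoContent {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := restartContainer("qbittorrent"); err != nil {
		panic(err)
	}
}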
In the meantime, feel free to use this script I made, it's not perfect but good enough. Keep it running the whole time on the host system.
#!/bin/bash
# Gluetun monitoring script by Gylesie. More info:
# https://github.com/qdm12/gluetun/issues/1407

######### Config:
gluetun_container_id="gluetun"
qbittorrent_container_id="qbittorrent"
timeout="60"
docker="/usr/bin/docker"
#################################################

log() {
    echo "$(date) [INFO] $1"
}

# Wait for the gluetun container to be running
while ! "$docker" inspect "$gluetun_container_id" | jq -e '.[0].State.Running' > /dev/null; do
    log "Waiting for the container($gluetun_container_id) to be up and running! Sleeping for $timeout seconds..."
    sleep "$timeout"
done

# Store the start time of the script
start_time=$(date +%s)

# Stream the logs and process new lines only
"$docker" logs -t -f "$gluetun_container_id" 2>&1 | while read -r line; do
    # Get the timestamp of the log line
    log_time=$(date -d "$(echo "$line" | cut -d ' ' -f1)" +%s)

    # Check if the log line was generated after the script started
    if [[ "$log_time" -ge "$start_time" ]]; then
        # Check if the VPN was restarted
        if [[ "$line" =~ "[wireguard] Wireguard is up" ]]; then
            # Check if the qbittorrent container is running
            if "$docker" inspect "$qbittorrent_container_id" | jq -e '.[0].State.Running' > /dev/null; then
                log "Restarting qbittorrent!"
                "$docker" restart "$qbittorrent_container_id"
            else
                log "qBittorrent container($qbittorrent_container_id) is not running! Passing..."
            fi
        fi
    fi
done
Are you interested in implementing a way to define a custom script after the VPN gets restarted? That would be kinda useful in situations like this.
Yes and no, because this script would likely have to run on the host, outside the gluetun container. We could eventually, as an option, add capabilities for Gluetun to do Docker host operations by bind mounting the docker socket, but that's kinda risky security wise (although it already runs as root + NET_ADMIN capabilities, so maybe why not). Anyway, the backlog of more pressing issues is already thick, but let's keep this open; it would be interesting to explore this more.
I'd imagine it would be possible to have some environment variables for Gluetun which specify the address, port, username and password of your qBittorrent instance; then Gluetun could use the qBittorrent web API to change the port and then back whenever the tunnel is restarted. This wouldn't require any special Docker permissions. Obviously not the cleanest solution, but a solution nonetheless.
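For illustration, the whole toggle is only a couple of calls to the qBittorrent WebUI API (v2). A rough, untested Go sketch with placeholder credentials and ports, mirroring the manual workaround described above:

// Untested sketch of the idea (qBittorrent WebUI API v2 as I understand it;
// verify the endpoints and your credentials before relying on this).
package main

import (
	"fmt"
	"net/http"
	"net/http/cookiejar"
	"net/url"
)

const webui = "http://127.0.0.1:8080" // placeholder WebUI address

func login(c *http.Client, user, pass string) error {
	resp, err := c.PostForm(webui+"/api/v2/auth/login",
		url.Values{"username": {user}, "password": {pass}})
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("login failed: %s", resp.Status)
	}
	return nil
}

func setListenPort(c *http.Client, port int) error {
	prefs := fmt.Sprintf(`{"listen_port":%d}`, port)
	resp, err := c.PostForm(webui+"/api/v2/app/setPreferences",
		url.Values{"json": {prefs}})
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("setPreferences failed: %s", resp.Status)
	}
	return nil
}

func main() {
	jar, _ := cookiejar.New(nil) // keeps the SID session cookie from login
	client := &http.Client{Jar: jar}

	if err := login(client, "admin", "adminadmin"); err != nil {
		panic(err)
	}
	// Toggle the port away and back, which makes qBittorrent re-open its
	// listener on the freshly created tunnel.
	if err := setListenPort(client, 6882); err != nil {
		panic(err)
	}
	if err := setListenPort(client, 6881); err != nil {
		panic(err)
	}
}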
@Eiqnepm I wasn't aware of such a web API; can you create a separate issue for this? Definitely something doable!
@Eiqnepm I wasn't aware of such a web API; can you create a separate issue for this? Definitely something doable!
The API is documented here. I went ahead and created the new issue https://github.com/qdm12/gluetun/issues/1441#issue-1612862391. Thanks a bunch for the quick response!
I've gone ahead and made a container, portcheck, purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.
Docker Compose example
version: "3"
services:
  gluetun:
    cap_add:
      - NET_ADMIN
    container_name: gluetun
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - FIREWALL_VPN_INPUT_PORTS=6881
      - OWNED_ONLY=yes
      - SERVER_CITIES=Amsterdam
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_ADDRESSES=👀
      - WIREGUARD_PRIVATE_KEY=👀
    image: qmcgaw/gluetun
    ports:
      - 8080:8080 # qBittorrent
    restart: unless-stopped
    volumes:
      - ./gluetun:/gluetun
  portcheck:
    container_name: portcheck
    depends_on:
      - qbittorrent
    environment:
      - DIAL_TIMEOUT=5
      - QBITTORRENT_PASSWORD=adminadmin
      - QBITTORRENT_PORT=6881
      - QBITTORRENT_USERNAME=admin
      - QBITTORRENT_WEBUI_PORT=8080
      - QBITTORRENT_WEBUI_SCHEME=http
      - TIMEOUT=300
    image: eiqnepm/portcheck
    network_mode: service:gluetun
    restart: unless-stopped
  qbittorrent:
    container_name: qbittorrent
    environment:
      - PGID=1000
      - PUID=1000
      - TZ=Etc/UTC
      - WEBUI_PORT=8080
    image: lscr.io/linuxserver/qbittorrent
    network_mode: service:gluetun
    restart: unless-stopped
    volumes:
      - ./qbittorrent/config:/config
      - ./qbittorrent/downloads:/downloads
Environment variables
Variable | Default | Description |
---|---|---|
QBITTORRENT_PORT | 6881 | qBittorrent incoming connection port |
QBITTORRENT_WEBUI_PORT | 8080 | Port of the qBittorrent WebUI |
QBITTORRENT_WEBUI_SCHEME | http | Scheme of the qBittorrent WebUI |
QBITTORRENT_USERNAME | admin | qBittorrent WebUI username |
QBITTORRENT_PASSWORD | adminadmin | qBittorrent WebUI password |
TIMEOUT | 300 | Time in seconds between each port check |
DIAL_TIMEOUT | 5 | Time in seconds before the port check is considered incomplete |
I've just updated the container so that it no longer relies on the Gluetun HTTP control server for the public IP address of the VPN connection. It now uses the outbound address from within the Gluetun service network to check the qBittorrent incoming port, which also has the added benefit of not needing to query the qBittorrent incoming port from the public IP address of your server.
For anyone that was using this before I made the change, make sure to run the container inside of the Gluetun service network and update the environment variables which have changed.
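For anyone curious, the core of the check is roughly the following (a simplified sketch, not the actual portcheck source; discovering the outbound/public address is omitted and the address below is a placeholder):

// Sketch of the port check: from inside the gluetun network namespace, try to
// reach the forwarded port on the VPN's public address and report whether it
// answers within the dial timeout.
package main

import (
	"fmt"
	"net"
	"time"
)

// portReachable dials ip:port with a timeout (cf. DIAL_TIMEOUT) and returns
// true if something accepts the TCP connection.
func portReachable(ip string, port int, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", fmt.Sprintf("%s:%d", ip, port), timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	publicIP := "203.0.113.1" // placeholder for the VPN's outbound address
	if portReachable(publicIP, 6881, 5*time.Second) {
		fmt.Println("port open")
		return
	}
	fmt.Println("port closed: toggle the qBittorrent listen port via the WebUI API")
}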
I recently switched from linuxserver/transmission to linuxserver/qbittorrent and noticed that qBittorrent (running inside the gluetun Docker network) stops working after some time. I have been suspecting that this is because gluetun restarts the VPN internally for some reason. I am glad to see I am not the only one who has noticed this issue.
The extra container solution is nice but not ideal. I think I will revert to transmission until a proper solution is found, but I really appreciate all your efforts. Will keep subscribed for updates.
I've gone ahead and made a container, portcheck, purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.
Thank you for writing this - works great!
For others experiencing this issue, I'm wondering if it would also help to increase the HEALTH_VPN_DURATION_INITIAL config option. I'm seeing 6 reconnects in the last 12 hours, which seems really high.
Is the default setting of 6 seconds too sensitive?
I've gone ahead and made a container, portcheck, purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.
Thank you for writing this - works great!
For others experiencing this issue, I'm wondering if it would also help to increase the HEALTH_VPN_DURATION_INITIAL config option. I'm seeing 6 reconnects in the last 12 hours, which seems really high. Is the default setting of 6 seconds too sensitive?
My pleasure!
After reading the wiki, it seems the healthcheck was primarily created due to the unreliability of OpenVPN connections. Considering I'm using WireGuard, which is stateless, I've just decided to completely disable the healthcheck feature and see how that goes. With my current knowledge, barring my VPN provider itself going offline, I can't think of a reason why my connection would be interrupted (I guess we'll find out).
While the healthcheck feature cannot be disabled per se, you can just set HEALTH_TARGET_ADDRESS to the HEALTH_SERVER_ADDRESS, which defaults to 127.0.0.1:9999.
I've gone ahead and made a container, portcheck, purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.
Thank you for writing this - works great!
For others experiencing this issue, I'm wondering if it would also help to increase the HEALTH_VPN_DURATION_INITIAL config option. I'm seeing 6 reconnects in the last 12 hours, which seems really high. Is the default setting of 6 seconds too sensitive?
I can confirm that this fixed it for me. I set HEALTH_VPN_DURATION_INITIAL=120s about two weeks ago and haven't had this problem since. Comcast hiccups often in my area, so 6 seconds was definitely too aggressive for me.
In qBittorrent you can go into Options > Advanced and lock the network interface to tun0. This fixed the health check disconnect/reconnect issue for me months ago, as it's an issue with qBittorrent not handling reconnects correctly. I will still probably set HEALTH_VPN_DURATION_INITIAL=120s just because I hate seeing a bunch of reconnects in the logs.
Also, someone just posted a bug that tun0 disappeared after the last update, but it hasn't been verified yet.
I can also confirm this. I was having this problem regularly, but locking the network interface to tun0 in qBittorrent has also solved it for me.
Any chance you are on the latest version and not having the missing tun0 bug? Someone pulled yesterday and said they lost it, but they are also having OpenVPN cert issues, so it's possibly not a valid bug but a symptom of a different one.
I was running 3.32. I've updated to 3.33 and do not have any issues with tun0. Or are you referring to later git commits? I'm on a Synology NAS (DSM7) as well, but WireGuard to Mullvad. So far everything is fine. I'll keep an eye on the public port issue as ever, but so far tun0 is present and still bound in qBittorrent as expected.
In the meantime, feel free to use this script I made, it's not perfect but good enough. Keep it running the whole time on the host system.
I tested this script with an echo instead of the restart before actually enabling it, and if your gluetun has been running for a while and has already restarted a few times, it will restart qBittorrent just as many times in rapid sequence. I think I will try the longer timeout for the gluetun healthcheck first to avoid the internal reconnects.
Switched over to this recently and started seeing this daily (scheduled VPN reconnect). Glad it's already been reported but hoping for an integrated solution.
AirVPN Wireguard here. Same solutions seem to work (restarting container) however I would like to avoid having to do that.
Is an official solution possible? @qdm12
Switched over to this recently and started seeing this daily (scheduled VPN reconnect). Glad it's already been reported but hoping for an integrated solution.
AirVPN Wireguard here. Same solutions seem to work (restarting container) however I would like to avoid having to do that.
Is an official solution possible? @qdm12
The best workaround for now is to use the libtorrentv1 version of qBittorrent, or switch to transmission. It's an issue with libtorrentv2.
Switched over to this recently and started seeing this daily (scheduled VPN reconnect). Glad it's already been reported but hoping for an integrated solution.
AirVPN Wireguard here. Same solutions seem to work (restarting container) however I would like to avoid having to do that.
Is an official solution possible? @qdm12
If restarting the container is undesirable, you should use https://github.com/qdm12/gluetun/issues/1407#issuecomment-1461582887.
@ksurl Sounds like a downgrade best avoided. Is there a bug reference for the libtorrentv2 issue?
@Eiqnepm Nifty but requires another container, and isn't on the UNRAID app portal. Looking for an official solution within this container. Can you merge the solution with a pull request here?
@ksurl Sounds like a downgrade best avoided. Is there a bug reference for the libtorrentv2 issue?
@Eiqnepm Nifty but requires another container, and isn't on the UNRAID app portal. Looking for an official solution within this container. Can you merge the solution with a pull request here?
I found no other functionality changes with v1. Does Unraid not let you use any image from Docker Hub? You could accomplish the same thing with a cron script to poke the API.
and isn't on the UNRAID app portal
Under Apps and then Settings, enable additional search results from Docker Hub.
The container is very lightweight. It could be implemented into Gluetun; I even made an issue upon request https://github.com/qdm12/gluetun/issues/1441#issue-1612862391, however I don't currently understand the inner workings of Gluetun and don't have the ability to implement the feature myself at this time.
If the maintainer decides this is an issue that Gluetun should resolve first hand, it should not be a very daunting task, considering I managed to get it done with just over two-hundred lines of Go.
If this is a libtorrent issue then a bug should be opened there. I don't think gluetun should add a fix for a third-party issue that already has a simple container workaround.
and isn't on the UNRAID app portal
Under Apps and then Settings, enable additional search results from Docker Hub.
Cool that there is that option, however I do not see it.
As it happens... the issue sort of just went away on its own, apparently. There were several days I needed to restart the container, but after a recent Gluetun update the issue seems to have gone away.
Here's how I handle restarting dependent dockers when Gluetun restarts: https://gist.github.com/Snuffy2/1d49250df3a5c8fdb3a24d486df92015
I've gone ahead and made a container, portcheck, purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.
@eiqnepm I am a bit confused by portcheck. Does portcheck change the qBittorrent port to a random one and then after some time change the port back to the original one (the one configured with port forwarding)?