[Bug] - Kasm Proxy swaps networks when deploying container on ipvlan
**Existing Resources**
- [x] Please search the existing issues for related problems
- [x] Consult the product documentation: Docs
- [x] Consult the FAQ: FAQ
- [x] Consult the Troubleshooting Guide: Guide
- [x] Reviewed existing training videos: Youtube
**Describe the bug**
Upon deployment of a new (in my case Ubuntu) container on a separate Docker L2 IPVLAN network from the Kasm web interface, the kasm_proxy container adds itself as a member of that network and becomes inaccessible.
**To Reproduce**
Steps to reproduce the behavior:
- Have a single-server Kasm instance with two interfaces, eth0 being management and eth1 being a trunk.
- Deploy a new L2 IPVLAN Docker network named z_internalvlan_100, using eth1 as the parent and with the relevant network configs (see the setup sketch after this list).
- Download and configure a container (Ubuntu Desktop) to use only the z_internalvlan_100 network for deployment.
- Attempt to deploy said Ubuntu Desktop container.
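For reference, the network from step 2 was created roughly as sketched below (the exact `docker network create` command is repeated under Additional context; the manual VLAN subinterface steps are an assumption on the write-up's part, since Docker can also create `ens19.100` itself when the parent name carries a VLAN tag):

```sh
# Host-side setup sketch; ens19 is eth1's predictable interface name on this VM.
# The two ip commands are optional: Docker auto-creates ens19.100 from
# "-o parent=ens19.100" if it does not already exist.
ip link add link ens19 name ens19.100 type vlan id 100
ip link set ens19.100 up

docker network create -d ipvlan \
  --subnet=10.0.1.0/24 --gateway=10.0.1.1 --ip-range=10.0.1.192/30 \
  -o ipvlan_mode=l2 -o parent=ens19.100 \
  z_internalvlan_100
```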
**Expected behavior**
The Ubuntu Desktop container is presented in the browser window.
**Screenshots**
I don't think screenshots would help explain this one; if you would like to see something specific, let me know.
**Workspaces Version**
1.14
**Workspaces Installation Method**
Single-server install on an Ubuntu 22.04 VM on top of Proxmox.
**Client Browser (please complete the following information):**
- OS: Windows 11
- Browser: Firefox
- Version: 118.0.2
**Workspace Server Information (please provide the output of the following commands):**
`uname -a`

```
Linux tfsup-kasm01 5.15.0-87-generic #97-Ubuntu SMP Mon Oct 2 21:09:21 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
```
`cat /etc/os-release`

```
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
```
`sudo docker info`

```
Client: Docker Engine - Community
 Version:    24.0.6
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.11.2
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.5.0
    Path:     /usr/local/lib/docker/cli-plugins/docker-compose

Server:
 Containers: 8
  Running: 8
  Paused: 0
  Stopped: 0
 Images: 15
 Server Version: 24.0.6
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 61f9fd88f79f081d64d6fa3bb1a0dc71ec870523
 runc version: v1.1.9-0-gccaecfc
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 5.15.0-87-generic
 Operating System: Ubuntu 22.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.751GiB
 Name: tfsup-kasm01
 ID: d7b3cc30-65d9-466d-a6d5-be4bb791fcdf
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
```
`sudo docker ps | grep kasm`

```
541e6cd97ab4   kasmweb/nginx:1.25.1       "/docker-entrypoint.…"   2 hours ago    Up 2 hours             80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   kasm_proxy
0fa34df34147   kasmweb/share:1.14.0       "/bin/sh -c '/usr/bi…"   45 hours ago   Up 2 hours (healthy)   8182/tcp                                        kasm_share
a62fc2c769bc   kasmweb/agent:1.14.0       "/bin/sh -c '/usr/bi…"   45 hours ago   Up 2 hours (healthy)   4444/tcp                                        kasm_agent
3050e91cb6fa   redis:5-alpine             "docker-entrypoint.s…"   45 hours ago   Up 2 hours             6379/tcp                                        kasm_redis
8aa44b34f47a   kasmweb/manager:1.14.0     "/bin/sh -c '/usr/bi…"   45 hours ago   Up 2 hours (healthy)   8181/tcp                                        kasm_manager
f991a1f9af71   kasmweb/kasm-guac:1.14.0   "/dockerentrypoint.sh"   45 hours ago   Up 2 hours (healthy)                                                   kasm_guac
be9f52dafd73   kasmweb/api:1.14.0         "/bin/sh -c '/usr/bi…"   46 hours ago   Up 2 hours (healthy)   8080/tcp                                        kasm_api
98008b06a362   postgres:12-alpine         "docker-entrypoint.s…"   46 hours ago   Up 2 hours (healthy)   5432/tcp                                        kasm_db
```
**Additional context**
Additional commands for visibility:
- `ip a`
```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 06:da:22:3b:3e:e2 brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 10.10.0.142/24 brd 10.10.0.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::4da:22ff:fe3b:3ee2/64 scope link
       valid_lft forever preferred_lft forever
3: ens19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 5a:6f:75:82:52:c5 brd ff:ff:ff:ff:ff:ff
    altname enp0s19
    inet6 fe80::586f:75ff:fe82:52c5/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:d0:03:2b:a4 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:d0ff:fe03:2ba4/64 scope link
       valid_lft forever preferred_lft forever
5: br-984259418b25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:45:e9:ef:14 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-984259418b25
       valid_lft forever preferred_lft forever
    inet6 fe80::42:45ff:fee9:ef14/64 scope link
       valid_lft forever preferred_lft forever
39: ens19.100@ens19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 5a:6f:75:82:52:c5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::586f:75ff:fe82:52c5/64 scope link
       valid_lft forever preferred_lft forever
41: vethc4d9e4c@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-984259418b25 state UP group default
    link/ether 9a:91:6b:30:ea:35 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::9891:6bff:fe30:ea35/64 scope link
       valid_lft forever preferred_lft forever
43: vetheaf598a@if42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-984259418b25 state UP group default
    link/ether ae:5e:33:70:92:ea brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ac5e:33ff:fe70:92ea/64 scope link
       valid_lft forever preferred_lft forever
45: vethfdbeeca@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-984259418b25 state UP group default
    link/ether 76:94:83:99:8d:31 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::7494:83ff:fe99:8d31/64 scope link
       valid_lft forever preferred_lft forever
47: veth17c3e72@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-984259418b25 state UP group default
    link/ether c2:7a:14:55:0d:9e brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::c07a:14ff:fe55:d9e/64 scope link
       valid_lft forever preferred_lft forever
49: veth6eb17eb@if48: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-984259418b25 state UP group default
    link/ether da:d7:4b:8e:db:2d brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::d8d7:4bff:fe8e:db2d/64 scope link
       valid_lft forever preferred_lft forever
51: veth68e6587@if50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-984259418b25 state UP group default
    link/ether 9a:8b:a8:6d:07:49 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::988b:a8ff:fe6d:749/64 scope link
       valid_lft forever preferred_lft forever
53: veth1e89423@if52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-984259418b25 state UP group default
    link/ether 16:59:0d:44:4f:38 brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet6 fe80::1459:dff:fe44:4f38/64 scope link
       valid_lft forever preferred_lft forever
55: veth9f7b408@if54: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-984259418b25 state UP group default
    link/ether 06:89:ad:e0:43:9d brd ff:ff:ff:ff:ff:ff link-netnsid 7
    inet6 fe80::489:adff:fee0:439d/64 scope link
       valid_lft forever preferred_lft forever
```
- `docker network ls`

```
NETWORK ID     NAME                   DRIVER    SCOPE
317223d6c11b   bridge                 bridge    local
c169879f2232   host                   host      local
984259418b25   kasm_default_network   bridge    local
f3e2254d51b8   none                   null      local
cc3089673f1b   z_internalvlan_100     ipvlan    local
```
- `docker network inspect z_internalvlan_100`

BEFORE KASM UBUNTU CONTAINER DEPLOYMENT:

```json
[
  {
    "Name": "z_internalvlan_100",
    "Id": "cc3089673f1bd98056774bfb898d5d2eda5a417b6c6ebfb0f2e5ad4b6edd7f00",
    "Created": "2023-10-22T21:22:20.753620369Z",
    "Scope": "local",
    "Driver": "ipvlan",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": {},
      "Config": [{ "Subnet": "10.0.1.0/24", "IPRange": "10.0.1.192/30", "Gateway": "10.0.1.1" }]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": { "Network": "" },
    "ConfigOnly": false,
    "Containers": {},
    "Options": { "ipvlan_mode": "l2", "parent": "ens19.100" },
    "Labels": {}
  }
]
```

AFTER CONTAINER DEPLOYMENT:

```json
[
  {
    "Name": "z_internalvlan_100",
    "Id": "cc3089673f1bd98056774bfb898d5d2eda5a417b6c6ebfb0f2e5ad4b6edd7f00",
    "Created": "2023-10-22T21:22:20.753620369Z",
    "Scope": "local",
    "Driver": "ipvlan",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": {},
      "Config": [{ "Subnet": "10.0.1.0/24", "IPRange": "10.0.1.192/30", "Gateway": "10.0.1.1" }]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": { "Network": "" },
    "ConfigOnly": false,
    "Containers": {
      "541e6cd97ab4b79722be42f6c6e79b3fdb18746e10762ca622fee3629bbb3940": { "Name": "kasm_proxy", "EndpointID": "95cd6e69d07ee39352558a7e5e99133260922d7ef4b0e2ce9fa75d68a894166e", "MacAddress": "", "IPv4Address": "10.0.1.193/24", "IPv6Address": "" },
      "6f0e156a2519e7a2b319b436675c9570bd1deedea65b83e727b74688876c0979": { "Name": "REDACTED", "EndpointID": "f41a2baf3190919387434cf1958cbe32f56de6846b747b6d00475e638f4e5c5b", "MacAddress": "", "IPv4Address": "10.0.1.192/24", "IPv6Address": "" }
    },
    "Options": { "ipvlan_mode": "l2", "parent": "ens19.100" },
    "Labels": {}
  }
]
```
- `docker network inspect kasm_default_network`

```json
[
  {
    "Name": "kasm_default_network",
    "Id": "984259418b2505431d1c5e2f9c3e195fe59682598d840e5c5499946e4975409d",
    "Created": "2023-10-21T01:11:59.893946146Z",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": {},
      "Config": [{ "Subnet": "172.18.0.0/16", "Gateway": "172.18.0.1" }]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": { "Network": "" },
    "ConfigOnly": false,
    "Containers": {
      "0fa34df3414782b3bab330d3461ea657091c2c52d2e296848543f3b404739574": { "Name": "kasm_share", "EndpointID": "7224bf4da0fff77ea77e54a6363b58dc507daf21ca43c29c1d1db2e94e03eab3", "MacAddress": "02:42:ac:12:00:07", "IPv4Address": "172.18.0.7/16", "IPv6Address": "" },
      "3050e91cb6faf154e85cc6f3a572c1901a46be3adba0d593c6deeb3a68e18a5b": { "Name": "kasm_redis", "EndpointID": "6b250716c112abb34cdcfd66c1d13d74ae1a2c4bd270da15f29564edb0b77814", "MacAddress": "02:42:ac:12:00:02", "IPv4Address": "172.18.0.2/16", "IPv6Address": "" },
      "541e6cd97ab4b79722be42f6c6e79b3fdb18746e10762ca622fee3629bbb3940": { "Name": "kasm_proxy", "EndpointID": "45b5391bff69c148269b940ba5aa06c60d1fb306966579ec6faf63f07cbe076e", "MacAddress": "02:42:ac:12:00:09", "IPv4Address": "172.18.0.9/16", "IPv6Address": "" },
      "8aa44b34f47ae7aac80e0ee2a491d24086a2a767b55aa7e591945ef26c3387cb": { "Name": "kasm_manager", "EndpointID": "2203fe261c3c14bd6a40703d89201a915cf52a498d397203fe80746d460bf996", "MacAddress": "02:42:ac:12:00:05", "IPv4Address": "172.18.0.5/16", "IPv6Address": "" },
      "98008b06a3629ce94fb4c694d74fc879b2bd373e83a6c170461251de5de11e86": { "Name": "kasm_db", "EndpointID": "6fc1f0b674a18903d7b5aa8acca0fc32f0ef3f3d92edecbf94763ed9528964e4", "MacAddress": "02:42:ac:12:00:04", "IPv4Address": "172.18.0.4/16", "IPv6Address": "" },
      "a62fc2c769bcea82b09cce97d779bc66c7ae8b18020058f13ee66c10634bfdc0": { "Name": "kasm_agent", "EndpointID": "a21a8be00976c87a9c7368dfe936133762162343a85cfbc8e2e6a327c0e8255a", "MacAddress": "02:42:ac:12:00:08", "IPv4Address": "172.18.0.8/16", "IPv6Address": "" },
      "be9f52dafd73b5dc03c59345a8e65503bc7a4c181a02bf8df80ec25e196e5a4f": { "Name": "kasm_api", "EndpointID": "e866672c24157f383d5de5c0a5174d3787ced9bd2181e29c08f5650ffe40dbdf", "MacAddress": "02:42:ac:12:00:06", "IPv4Address": "172.18.0.6/16", "IPv6Address": "" },
      "f991a1f9af714835ce664c954455af117de0323be5ebe718d5646d5dc69d3142": { "Name": "kasm_guac", "EndpointID": "812c51ffe1dc8a9feb6cc18dca2cba05c22bdf3ac0151494690e0a4bc24d518b", "MacAddress": "02:42:ac:12:00:03", "IPv4Address": "172.18.0.3/16", "IPv6Address": "" }
    },
    "Options": {},
    "Labels": {}
  }
]
```
Additional Details:
Upon deploying the container, the browser spends some time appearing to load the newly created session. It then errors out with a gateway-timeout banner and finally swaps to a "you're not connected" page. Port 443 is no longer reachable at the management IP address, and restarting services does not restore connectivity.
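From the client side the failure is easy to confirm; a sketch of the spot checks (addresses taken from the `ip a` output above):

```sh
# From the 10.0.1.10 workstation, after deploying a session on the ipvlan:
curl -vk --max-time 10 https://10.10.0.142/   # HTTPS to the management IP now times out
ping -c 3 10.10.0.142                         # ICMP to the same address still answers
```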
The exact command used to create the network was:

```sh
docker network create -d ipvlan --subnet=10.0.1.0/24 --gateway=10.0.1.1 --ip-range=10.0.1.192/30 -o ipvlan_mode=l2 -o parent=ens19.100 z_internalvlan_100
```
Actions Taken
One command immediately fixed the issue for me. EDIT: this was not a permanent fix. It needs to be re-run every time a container on this network is deployed.
```sh
docker network disconnect z_internalvlan_100 kasm_proxy
```
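Since it has to be re-run after every deployment, one could script it. A rough, untested sketch using Docker's event stream (relying on the documented `docker events` network filters):

```sh
#!/bin/sh
# Watch for containers being connected to z_internalvlan_100 and kick
# kasm_proxy back off the network each time it attaches itself.
docker events \
  --filter type=network \
  --filter network=z_internalvlan_100 \
  --filter event=connect \
  --format '{{.Actor.Attributes.container}}' |
while read -r cid; do
  # The event attribute carries the connecting container's ID; only act
  # when that container turns out to be kasm_proxy.
  name=$(docker inspect --format '{{.Name}}' "$cid" 2>/dev/null)
  if [ "$name" = "/kasm_proxy" ]; then
    docker network disconnect z_internalvlan_100 kasm_proxy 2>/dev/null
  fi
done
```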
Further notes:
There was some documentation on IPVLAN configs where Docker would select a default route based on the name of the network; I don't think this is relevant, but I'm not sure. Link: https://kasmweb.com/docs/latest/how_to/restrict_to_docker_network.html
Jumping into a shell on the server and running `docker exec -it $container bash`, followed by `curl ifconfig.me`, worked just fine, so the deployed Kasm sessions seem healthy.
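For completeness, that health check amounts to this (the container name is whatever `docker ps` shows for the session; `ifconfig.me` just echoes the egress IP):

```sh
# Confirm a deployed session container has working outbound connectivity:
docker exec -it <session-container> bash -c 'curl -s ifconfig.me'
```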
I do have a firewall that places the management IP in one zone and the L2 VLAN in another. I have permitted all traffic between these ranges and the management IP. Further, on the trunk interface connected to the VM, all VLANs are accessible.
I'm not sure if this is an issue with Docker being strange or if there is a deployment option that I simply missed. Any suggestions would be excellent. This is a vanilla install on a test server in my homelab; if you want to throw things at me to see if they stick, I'm fine with that.
We do have an article specifically on IPVLAN; the one you linked is about Docker networks in general: https://kasmweb.com/docs/latest/how_to/ipvlan.html
Generally speaking, the kasm_proxy container needs to be in all Docker networks so that it can proxy the desktop connection to KasmVNC running inside the container without exposing container ports on the host. For IPVLAN networks, the containers are exposed directly to the network, and thus it may not be necessary for kasm_proxy to be on the same Docker network, depending on your internal networking. If you removed it from that Docker network, nginx would then proxy to the container's IP, which would result in packets going back up to the router and then back down to the same host on the trunk interface. This is sub-par for network flow, but perhaps desired, depending on your security posture.
As for why kasm_proxy can't speak to the containers directly when it is attached to the IPVLAN Docker network, I can't say for sure. We have the setup covered in the above document running in client environments, but it is complicated, and there are networking configurations that are simply outside of our purview. For example, a private VLAN setup on a Cisco device would likely cause this, or perhaps host-based iptables or routing settings. I would exec into the kasm_proxy container as root and attempt basic network troubleshooting from there. Try to curl the KasmVNC service in the workspace container while kasm_proxy is attached to that Docker network. As root you should be able to install additional tools that may help with network troubleshooting.
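Something along these lines (a sketch, not verified against this exact image; the 10.0.1.192 address comes from your `network inspect` output above, and 6901 is the usual KasmVNC port but is an assumption here):

```sh
# Basic checks from inside kasm_proxy. iproute2/curl may need installing
# first (apk add or apt-get install, depending on the image's base).
docker exec -it -u root kasm_proxy sh
ip route                            # which interface serves 10.0.1.0/24?
ip route get 10.0.1.192             # path the proxy takes to the workspace container
curl -vk https://10.0.1.192:6901/   # KasmVNC in the workspace (port assumed)
```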
I've reviewed the code and there is currently no way to tell the kasm_agent not to attach kasm_proxy to a container's network; this is done for each created session. If there were a compelling case, we could add an option to disable this behavior, but I suspect there must be something else going on that is interfering with proper behavior.
Alrighty, so more information here. I may have stumbled on the problem.
For context, the computer reaching out is in the 10.0.1.0/24 range, which is my internal VLAN 100. This is the same VLAN I configured the Docker ipvlan to connect to, and in doing so, the nginx proxy gains a new routing table entry with one problem...
`docker exec -it kasm_proxy route -n`

```
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.18.0.1      0.0.0.0         UG    0      0        0 eth0
10.0.1.0        0.0.0.0         255.255.255.0   U     0      0        0 eth2
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
```
I'm not a networking person, but as I understand it, the kasm_proxy container is a member of kasm_default_network plus whatever other networks are configured for containers managed by Kasm. When a container is deployed to 10.0.1.x/24 (and, by extension, kasm_proxy is added at the next address in the range), the connected route for 10.0.1.0/24 via eth2 is created. Replies to clients in 10.0.1.0/24 then leave directly through eth2 instead of going back through the default gateway the connection arrived from, so the traffic never finds its way back. This is why, from my computer at 10.0.1.10, SSH and ICMP are unaffected (routed to the server itself), but HTTPS is completely borked (the reply is routed into the Docker network and cannot find a path back).
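The asymmetry is visible directly from inside the proxy; a sketch (iproute2 assumed present in the container, output paraphrased):

```sh
# Ask the proxy's routing table which way a reply to my workstation would go:
docker exec -u root kasm_proxy ip route get 10.0.1.10
# Expect something like:
#   10.0.1.10 dev eth2 src 10.0.1.193
# i.e. straight out the ipvlan interface, bypassing the default gateway the
# original HTTPS connection came in through.
```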
Running curl from another server or workstation outside of my VLAN 100 returns the webpage as normal. I believe this problem to be a configuration error on my side, and I don't really see a use case for deploying containers on a VLAN that the connecting workstation is already on. As such, I think we can consider this not-a-bug and close this report.
Thanks for your help!