k3sup
K3s picks the wrong interface when used with Vagrant
I've set up a 2-machine Vagrant cluster. The problem is that the VMs have multiple interfaces and k3s picks the wrong one, unless instructed otherwise.
Expected Behaviour
When I ask a node to join, it should pick the correct address, either automatically from the `--ip=` argument or by letting me specify the interface manually.
Current Behaviour
```sh
k3sup install --ip=192.168.100.101 --user=vagrant
export KUBECONFIG=/Users/foo/kubeconfig
kubectl config set-context default
kubectl get node -o wide

k3sup join --ip=192.168.100.102 --user=vagrant \
  --server-user=vagrant --server-ip=192.168.100.101
```
Gives:

```
Feb 06 13:39:19 node2 k3s[2114]: time="2021-02-06T13:39:19.962640467Z" level=error msg="Failed to connect to proxy" error="dial tcp 10.0.2.15:6443: connect: connection refused"
Feb 06 13:39:19 node2 k3s[2114]: time="2021-02-06T13:39:19.962782997Z" level=error msg="Remotedialer proxy error" error="dial tcp 10.0.2.15:6443: connect: connection refused"
```
```
# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:8d:c0:4d brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 85716sec preferred_lft 85716sec
    inet6 fe80::a00:27ff:fe8d:c04d/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:5c:26:4a brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.102/24 brd 192.168.100.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe5c:264a/64 scope link
       valid_lft forever preferred_lft forever
```
```
% kubectl get node -o wide
NAME    STATUS   ROLES    AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
node1   Ready    master   30m   v1.19.7+k3s1   10.0.2.15     <none>        Debian GNU/Linux 10 (buster)   4.19.0-9-amd64   containerd://1.4.3-k3s1
```
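For context: 10.0.2.15 on eth0 is VirtualBox's NAT adapter, which carries the VM's default route, and k3s auto-detects the node IP from that interface. A minimal sketch of selecting the interface that owns the private-network address instead (the `ip -o -4 addr show` output from above is baked in as sample data; the 192.168.100.0/24 subnet is this setup's private network):

```sh
#!/bin/sh
# Sample `ip -o -4 addr show` output from node2 (taken from the dump above)
addrs='2: eth0    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
3: eth1    inet 192.168.100.102/24 brd 192.168.100.255 scope global eth1'

# Field 2 is the interface name, field 4 the CIDR; print the interface
# whose address sits inside the Vagrant private network 192.168.100.0/24
printf '%s\n' "$addrs" | awk '$4 ~ /^192\.168\.100\./ { print $2; exit }'
```

Run on the VM itself (`ip -o -4 addr show | awk …`), this prints `eth1`, which is the value one would hand to `--flannel-iface`.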
Related issues:
- https://github.com/k3s-io/k3s/issues/1523
- https://github.com/hashicorp/vagrant/issues/6456
Possible Solution
One workaround could be to change the (order of the) interfaces on the host, but I don't think that really qualifies as a solution.
I assume k3s just needs to be passed the right combination of arguments:
- `--bind-address`
- `--advertise-address`
- `--flannel-iface`
- `--node-ip`
- `--node-external-ip`

which may or may not need to be exposed through k3sup.
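If k3sup did forward these, an invocation might look like the sketch below (printed with `echo` as a dry run; the exact flag combination is an assumption, not a verified fix):

```sh
# Hypothetical: forwarding interface-related flags to k3s through k3sup's
# existing --k3s-extra-args mechanism. Drop the echo to actually run it.
SERVER_IP=192.168.100.101
EXTRA="--flannel-iface=eth1 --node-ip=$SERVER_IP --node-external-ip=$SERVER_IP"
echo k3sup install --ip="$SERVER_IP" --user=vagrant --k3s-extra-args "$EXTRA"
```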
Steps to Reproduce (for bugs)
```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "debian/buster64"
  config.ssh.forward_agent = true

  config.vm.provider "virtualbox" do |v|
    v.memory = "1024"
    v.cpus = 1
  end

  config.vm.define "node1" do |n|
    n.vm.hostname = "node1"
    n.vm.network "private_network", ip: "192.168.100.101", hostname: true
  end

  config.vm.define "node2" do |n|
    n.vm.hostname = "node2"
    n.vm.network "private_network", ip: "192.168.100.102", hostname: true
  end

  config.vm.provision "shell", inline: "apt-get update && apt-get -y install curl"
end
```
```sh
k3sup install --ip=192.168.100.101 --user=vagrant
k3sup join --ip=192.168.100.102 --user=vagrant \
  --server-user=vagrant --server-ip=192.168.100.101
```
```
% kubectl get node -o wide
NAME    STATUS   ROLES    AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
node1   Ready    master   30m   v1.19.7+k3s1   10.0.2.15     <none>        Debian GNU/Linux 10 (buster)   4.19.0-9-amd64   containerd://1.4.3-k3s1
```
Notice how it shows the wrong internal IP.
Context
I am trying to use k3s to set up a 2-node test cluster with Vagrant.
Your Environment
- What Kubernetes distribution are you using?
```
% kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7+k3s1", GitCommit:"5a00e38db4c198fb0725a6b709380aed8053d637", GitTreeState:"clean", BuildDate:"2021-01-14T23:09:21Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
```
- What OS or type of VM are you using for your cluster? Where is it hosted? (for `k3sup install`/`join`): Debian GNU/Linux 10 (buster) on VirtualBox via Vagrant.
- Operating System and version (e.g. Linux, Windows, MacOS):
```
vagrant@node2:~$ uname -a
Linux node2 4.19.0-9-amd64 #1 SMP Debian 4.19.118-2 (2020-04-29) x86_64 GNU/Linux
vagrant@node2:~$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```
The host is macOS 10.15.7
"Be part of the solution"
Subject to approval, are you willing to work on a Pull Request for this issue or feature request?
Sure
Thank you for the detailed write-up
> I assume k3s just needs to get passed the current combination of arguments: `--bind-address --advertise-address --flannel-iface --node-ip --node-external-ip`
I think the word "just" doesn't really work here, that's rather a lot of change for this use-case. I wonder why we haven't seen this yet from any other users? Even I didn't get this on Equinix Metal which comes with 3-4 adapters out of the box.
You can pass extra arguments to the installer script (see `k3sup join --help`); do you think you can use them for your Vagrant setup?
/set title: K3s picks the wrong interface when used with Vagrant
What's the minimal way to fix this? `--node-external-ip=` perhaps? https://github.com/k3s-io/k3s/issues/1523#issuecomment-637844241
/add label: support,enhancement
> I think the word "just" doesn't really work here, that's rather a lot of change for this use-case.
That was just the bird's-eye view with a sprinkle of hope, I guess :) Bummer.
> I wonder why we haven't seen this yet from any other users? Even I didn't get this on Equinix Metal which comes with 3-4 adapters out of the box.
Now that's rather odd - and even more intriguing. Maybe it's related to the gateway or order of the interfaces?
> You can pass extra arguments to the installer script with `k3sup join --help`, do you think you can use them for your Vagrant setup?
I totally missed that. It seems like `--k3s-extra-args string` could be the workaround for this case.
I'll give that a try and report back.
Not quite as easy, apparently:

```sh
k3sup install --ip=192.168.100.101 --user=vagrant \
  --k3s-extra-args "--node-external-ip=192.168.100.101"
```

```
% kubectl get node -o wide
NAME    STATUS   ROLES    AGE    VERSION        INTERNAL-IP   EXTERNAL-IP       OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
node1   Ready    master   2m8s   v1.19.7+k3s1   10.0.2.15     192.168.100.101   Debian GNU/Linux 10 (buster)   4.19.0-9-amd64   containerd://1.4.3-k3s1
```
Somehow
- `--bind-address`
- `--advertise-address`
- `--flannel-iface`

would also make more sense to me.
I made some progress:

```sh
k3sup install --ip=192.168.100.101 --user=vagrant --ssh-key=/Users/foo/servers/.vagrant/machines/node1/virtualbox/private_key \
  --k3s-extra-args "--flannel-iface=eth1"
```
This looks OK now. But joining is a problem:

```sh
k3sup join --ip=192.168.100.102 --user=vagrant --ssh-key=/Users/foo/servers/.vagrant/machines/node2/virtualbox/private_key \
  --k3s-extra-args "--flannel-iface=eth1" \
  --server-user=vagrant --server-ip=192.168.100.101
```
I was expecting a `--server-ssh-key=` option, but it seems there is no such thing.
That's why I am getting:
```
Running: k3sup join
Server IP: 192.168.100.101
Error: unable to connect to (server) 192.168.100.101:22 over ssh: ssh: handshake failed: ssh: unable to authenticate, attempted methods [publickey none], no supported methods remain
```
I have added the output of `vagrant ssh-config` to my `~/.ssh/config`:
```
Host node1
  HostName 192.168.100.101
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/foo/servers/.vagrant/machines/node1/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL
  ForwardAgent yes

Host node2
  HostName 192.168.100.102
  User vagrant
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/foo/servers/.vagrant/machines/node2/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL
  ForwardAgent yes
```
and access works OK:

```
% ssh node1 "echo OK"
OK
% ssh node2 "echo OK"
OK
```
So how can I set the key for the server connection? This is now slowly drifting in the same direction as https://github.com/alexellis/k3sup/issues/304
It would be great if something like this just worked for this setup:

```sh
k3sup install node1 --k3s-extra-args "--flannel-iface=eth1"
k3sup join node2 --server node1 --k3s-extra-args "--flannel-iface=eth1"
```
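Until then, one possible workaround sketch, assuming k3sup falls back to identities held by the local ssh-agent when no `--ssh-key` is given (shown with a throwaway key so it runs anywhere; in this setup you would `ssh-add` the two Vagrant private keys instead):

```sh
# Load a key into the local ssh-agent; with the real Vagrant keys this would be
#   ssh-add .vagrant/machines/node1/virtualbox/private_key
#   ssh-add .vagrant/machines/node2/virtualbox/private_key
eval "$(ssh-agent -s)" >/dev/null
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$keydir/demo_key"
ssh-add "$keydir/demo_key" 2>/dev/null
ssh-add -l | wc -l              # number of identities now held by the agent
ssh-agent -k >/dev/null         # stop the demo agent again
```

With both machine keys loaded, the `k3sup join` could then be retried without any `--ssh-key` flag; whether k3sup actually consults the agent for the server-side connection is an assumption worth testing.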
Hey,
I had exactly the same problem today: Vagrant with one master and two workers (all Debian Buster). I was able to solve it by starting it like this:
```sh
k3sup install --ip <MASTER_IP> --user root --k3s-extra-args --flannel-iface=eth1
k3sup join --ip <WORKER_01_IP> --server-ip <MASTER_IP> --user root --k3s-extra-args --flannel-iface=eth1
k3sup join --ip <WORKER_02_IP> --server-ip <MASTER_IP> --user root --k3s-extra-args --flannel-iface=eth1
```

I hope it helps.
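The same fix scales to any number of workers; a small sketch (the IPs are assumptions for this setup, and `echo` makes it a dry run you can inspect before removing it):

```sh
# Dry-run generator for the install/join commands above; drop the echos to execute.
MASTER_IP=192.168.100.101
EXTRA='--flannel-iface=eth1'
echo k3sup install --ip "$MASTER_IP" --user root --k3s-extra-args "$EXTRA"
for ip in 192.168.100.102 192.168.100.103; do
  echo k3sup join --ip "$ip" --server-ip "$MASTER_IP" --user root --k3s-extra-args "$EXTRA"
done
```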
I also had this same problem, but with multipass. I finally found an answer from here.
Initially I forced the node-external-ip and node-ip using the extra-args option for the server node, like so:

```sh
k3sup install --ip server_ip --user ubuntu --k3s-extra-args '--node-external-ip server_ip --node-ip server_ip'
k3sup join --user ubuntu --server-ip server_ip --ip agent_ip
```
Later on I recognized that k3s has an `--advertise-address` option, and that appeared to work as well, again without having to alter the calls for joining agents.

```sh
k3sup install --ip server_ip --user ubuntu --k3s-extra-args '--advertise-address server_ip'
k3sup join --user ubuntu --server-ip server_ip --ip agent_ip
```
I ran into the same issue on Vagrant. Would it be sensible to always explicitly set `--advertise-address`? If not, this could be added to the documentation as a known issue. Any opinions?
Thanks @procinger. Let's get this closed now, most if not all of the people seeking support there are not sponsors.
/lock