Ingress no longer exposing services on port 80
I've installed microk8s v1.21.0 from latest/edge and added the ingress addon, but microk8s isn't listening on port 80.
This certainly used to be possible, as it's been used in various demos. In some discussions about this issue it was suggested that it could be related to reinstalling microk8s (i.e. something isn't removed on uninstall). I ran sudo snap remove --purge microk8s and then sudo snap install microk8s --classic --channel=latest/edge followed by enabling the dns, dashboard, registry, storage and ingress addons.
Hi @mthaddon, perhaps the ingress class you were using is nginx; it was changed to public.
Can you try that?
@balchua this isn't about what ingress class my workload is using - there's just nothing listening on port 80 at all, so no matter how I configure an ingress resource I wouldn't be able to reach it.
I checked the logs and saw kubelet complaining about not being able to free up some space.
May 10 12:08:10 tenaya microk8s.daemon-kubelite[2638704]: I0510 12:08:10.071887 2638704 image_gc_manager.go:304] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=98 highThreshold=85 amountToFree=43136729088 lowThreshold=80
May 10 12:08:10 tenaya microk8s.daemon-kubelite[2638704]: E0510 12:08:10.073952 2638704 kubelet.go:1309] "Image garbage collection failed multiple times in a row" err="failed to garbage collect required amount of images. Wanted to free 43136729088 bytes, but freed 0 bytes"
Looking at the file system, I think it is almost full; try freeing up more space.
I've got the same issue but with ~700GB free on my machine here :)
@jnsgruk thats strange, ingress uses host network and we also run a test on ingress. Do you see similar logs in the apiserver or kubelite showing these https://github.com/ubuntu/microk8s/issues/2255#issuecomment-836615556 too?
@jnsgruk can you upload an inspection report as well?
I've freed up a bunch of disk space (18G free now), but am still experiencing the same issue. I'm now using microk8s v1.21.1 fwiw.
Also, I can get it working fine with a fresh install of MicroK8s inside multipass, but cannot do anything with my local install to make it work...
@mthaddon and @jnsgruk Perhaps we need to add this to the nginx controller daemonset.
template:
  spec:
    hostNetwork: true
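If anyone wants to try it, the suggestion above could be applied as a strategic-merge patch file (a sketch; the file name is hypothetical, and the daemonset name/namespace are assumed to be the MicroK8s defaults, applied with kubectl patch):

```yaml
# hostnetwork-patch.yaml (hypothetical file name)
spec:
  template:
    spec:
      hostNetwork: true   # controller pods share the node's network
                          # namespace and bind ports 80/443 directly
```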
I tried to add hostNetwork: true, but the pods failed to recreate due to:
Readiness probe failed: Get "http://host-ip:10254/healthz"
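For context, the failing probe is the controller's health endpoint. In the stock upstream manifest it looks roughly like this (a sketch based on the upstream ingress-nginx manifests, not verified against this exact MicroK8s revision); with hostNetwork: true the kubelet probes this port on the node itself, so anything else bound to 10254 or a host firewall rule can make it fail:

```yaml
readinessProbe:
  httpGet:
    path: /healthz
    port: 10254    # the controller's health/status port
    scheme: HTTP
```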
@imtiny can you upload the inspect tarball? Thanks
If I remove the liveness and readiness probe configs from the daemonset and check one of the running nginx controller pods, I can see the nginx process with ps -ef. But after then adding hostNetwork: true to the daemonset and checking again, there is no nginx process:
PID USER TIME COMMAND
1 www-data 0:00 /usr/bin/dumb-init -- /nginx-ingress-controller --configmap=ingress/nginx-load-balancer-microk8s-conf --tcp-services-configmap=ingress/nginx-ingress-tcp-microk8s-conf --udp-service
7 www-data 0:00 /nginx-ingress-controller --configmap=ingress/nginx-load-balancer-microk8s-conf --tcp-services-configmap=ingress/nginx-ingress-tcp-microk8s-conf --udp-services-configmap=ingress/ng
13 www-data 0:00 /bin/bash
20 www-data 0:00 ps -fe
Thank you @balchua. Here is the version information:
bash-5.1$ /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v0.44.0
Build: f802554ccfadf828f7eb6d3f9a9333686706d613
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.6
I think this is the nginx controller process, nginx-ingress-controller. Do you see port 80 or 443? Because that's what others don't see.
@balchua I think it should look like this, with an nginx master process and nginx worker processes; then ports 80 and 443 are listening:
bash-5.1$ ps -ef
PID USER TIME COMMAND
1 www-data 0:00 /usr/bin/dumb-init -- /nginx-ingress-controller --configmap=ingress/nginx-load-balancer-microk8s-conf --tcp-services-configmap=ingress/nginx-ingress-tcp-microk8s-conf --udp-service
6 www-data 0:00 /nginx-ingress-controller --configmap=ingress/nginx-load-balancer-microk8s-conf --tcp-services-configmap=ingress/nginx-ingress-tcp-microk8s-conf --udp-services-configmap=ingress/ng
24 www-data 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /etc/nginx/nginx.conf
34 www-data 0:00 nginx: worker process
35 www-data 0:00 nginx: worker process
36 www-data 0:00 nginx: cache manager process
37 www-data 0:00 nginx: cache loader process
102 www-data 0:00 /bin/bash
108 www-data 0:00 ps -ef
bash-5.1$ netstat -lntp
netstat: can't scan /proc - are you root?
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8181 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8181 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:10245 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:10246 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:10247 0.0.0.0:* LISTEN -
tcp 0 0 :::10254 :::* LISTEN -
tcp 0 0 :::80 :::* LISTEN -
tcp 0 0 :::80 :::* LISTEN -
tcp 0 0 :::8181 :::* LISTEN -
tcp 0 0 :::8181 :::* LISTEN -
tcp 0 0 :::443 :::* LISTEN -
tcp 0 0 :::443 :::* LISTEN -
If I add hostNetwork: true to the daemonset without the readiness and liveness checks, it looks like the earlier output instead: there is no port 80 or 443 listening.
I will have to reproduce this. Though I didn't look at the processes, the ingress flow works for me.
Thank you, maybe I can try it again tomorrow.
If I add hostNetwork: true to the daemonset, I also get "Container nginx-ingress-microk8s failed liveness probe, will be restarted" repeatedly, and still nothing listening on port 80 or 443.
Hi guys,
Trying to reproduce this issue on a single-node 1.21.
I didn't do the hostNetwork: true.
I'm attaching my inspect tarball in case you can spot something.
Finally I disabled the microk8s ingress add-on and deployed the nginx-ingress-controller using its Helm chart, which gives me more configuration flexibility.
Here the values.yaml says that hostNetwork: true may be deprecated in the future:
# Required for use with CNI based kubernetes installations (such as ones set up by kubeadm),
# since CNI and hostport don't mix yet. Can be deprecated once https://github.com/kubernetes/kubernetes/issues/23920
# is merged
hostNetwork: false
And it seems that deploying the nginx controller as a Deployment instead of a DaemonSet, and accessing it via a NodePort service from outside, is a better choice.
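For reference, the Helm values that select that setup could look roughly like this (a sketch; the key names are from the upstream ingress-nginx chart and are assumed to match the chart version in use):

```yaml
# values-override.yaml (sketch, assuming upstream ingress-nginx chart keys)
controller:
  kind: Deployment        # run as a Deployment rather than a DaemonSet
  hostNetwork: false      # avoid hostNetwork, per the deprecation note above
  service:
    type: NodePort        # expose via NodePort instead of host ports
```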
I'm also running into this issue, and it's as Tom described: I've reinstalled microk8s, and something has become detached from listening on port 80/443/whatever ports the ingress controller tries to set up on the host.
I was able to resolve by deploying a new microk8s inside multipass as was mentioned above.
What I'm seeing as the main difference is a lack of CNI-* tables in iptables on my laptop, but the CNI tables exist on the fresh multipass VM.
Two notes. My laptop is running 21.04. My Multipass VM is running 20.04.
sudo iptables-save |grep CNI returns nothing on my laptop, but returns the following on the VM:
ubuntu@charm-dev:~$ sudo iptables-save|grep CNI
:CNI-DN-e356a00397c30183861a5 - [0:0]
:CNI-HOSTPORT-DNAT - [0:0]
:CNI-HOSTPORT-MASQ - [0:0]
:CNI-HOSTPORT-SETMARK - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j CNI-HOSTPORT-DNAT
-A OUTPUT -m addrtype --dst-type LOCAL -j CNI-HOSTPORT-DNAT
-A POSTROUTING -m comment --comment "CNI portfwd requiring masquerade" -j CNI-HOSTPORT-MASQ
-A CNI-DN-e356a00397c30183861a5 -s 10.1.157.68/32 -p tcp -m tcp --dport 80 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-e356a00397c30183861a5 -s 127.0.0.1/32 -p tcp -m tcp --dport 80 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-e356a00397c30183861a5 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.1.157.68:80
-A CNI-DN-e356a00397c30183861a5 -s 10.1.157.68/32 -p tcp -m tcp --dport 443 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-e356a00397c30183861a5 -s 127.0.0.1/32 -p tcp -m tcp --dport 443 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-e356a00397c30183861a5 -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.1.157.68:443
-A CNI-DN-e356a00397c30183861a5 -s 10.1.157.68/32 -p tcp -m tcp --dport 10254 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-e356a00397c30183861a5 -s 127.0.0.1/32 -p tcp -m tcp --dport 10254 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-e356a00397c30183861a5 -p tcp -m tcp --dport 10254 -j DNAT --to-destination 10.1.157.68:10254
-A CNI-HOSTPORT-DNAT -p tcp -m comment --comment "dnat name: \"k8s-pod-network\" id: \"dd06c1f4d36e541b756edd2b993ad9105610c8a3d9ca3a1c5c3e2ebdf355807d\"" -m multiport --dports 80,443,10254 -j CNI-DN-e356a00397c30183861a5
-A CNI-HOSTPORT-MASQ -m mark --mark 0x2000/0x2000 -j MASQUERADE
-A CNI-HOSTPORT-SETMARK -m comment --comment "CNI portfwd masquerade mark" -j MARK --set-xmark 0x2000/0x2000
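A quick way to compare a broken machine with a working one is to count these chains (a diagnostic sketch; the chain names come from the VM output above):

```shell
# Diagnostic sketch: count CNI hostport chains in an iptables dump.
# On a live machine, run:  sudo iptables-save | grep -c CNI-HOSTPORT
# Demonstrated here on a captured sample taken from the working VM
# output above; prints 3 for this sample.
grep -c 'CNI-HOSTPORT' <<'EOF'
:CNI-HOSTPORT-DNAT - [0:0]
:CNI-HOSTPORT-MASQ - [0:0]
:CNI-HOSTPORT-SETMARK - [0:0]
EOF
```

A count of zero on the broken laptop versus non-zero on the VM matches the symptom described.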
inspection-report-20210709_142658.tar.gz
FYI, for support of 21.04 and later, microk8s inspect should use "ss" and "ip" commands instead of ifconfig and netstat.
Same on Fedora 34: no CNI records added to iptables, while same setup and same project on Ubuntu 20 works fine. Any ideas?
Exactly the same here on RHEL8. Any updates to this issue in 2022?
There were a couple of recent fixes cleaning up iptables rules. Would you be able to try the latest/edge build?
Possibly related to https://github.com/canonical/microk8s/issues/3092 .
Sure, refreshed with latest/edge channel and disabled+enabled ingress addon (in order to also apply addon updates if any, according to https://microk8s.io/docs/upgrading ).
However, the ingress pod is still not ready, and I still cannot contact my service.
Can you get any hints from the inspection-report-20220622_110532_latest-edge.tar.gz?
Hi @KLBonn.
I'm trying to replicate the issue on a fresh CentOS 8 stream instance, but I cannot.
Here's what I do:
sudo snap install microk8s --classic --channel=latest/edge
sudo microk8s enable ingress
sudo microk8s kubectl create deploy --image cdkbot/microbot:1 microbot
sudo microk8s kubectl expose deploy microbot --port 80
sudo microk8s kubectl create ingress microbot --rule=localhost/=microbot:80
Then test with:
[root@test-centos centos]# curl --silent localhost | grep hostname
<p class="centered">Container hostname: microbot-b6996696-p9528</p>
[root@test-centos centos]# microk8s.kubectl get pod,svc,ingress
NAME READY STATUS RESTARTS AGE
pod/microbot-b6996696-p9528 1/1 Running 0 54s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 16m
service/microbot ClusterIP 10.152.183.136 <none> 80/TCP 47s
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/microbot public localhost 80 21s
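For reference, the kubectl create ingress one-liner above expands to roughly this manifest (a sketch; pathType Exact is what create ingress emits for a bare / path, and the ingress class falls back to the cluster default, which is public on MicroK8s as noted earlier in the thread):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microbot
spec:
  rules:
  - host: localhost
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: microbot
            port:
              number: 80
```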
From the inspection report and the nginx ingress logs you posted, I see this:
I0622 09:05:02.018698 6 main.go:176] "Received SIGTERM, shutting down"
I0622 09:05:02.018815 6 nginx.go:375] "Shutting down controller queues"
I0622 09:05:02.052540 6 status.go:130] "removing value from ingress status" address=[{IP:127.0.0.1 Hostname: Ports:[]}]
I0622 09:05:02.056003 6 status.go:299] "updating Ingress status" namespace="kube-system" ingress="tls-example-ingress" currentValue=[{IP:127.0.0.1 Hostname: Ports:[]}] newValue=[]
I0622 09:05:02.064286 6 nginx.go:391] "Stopping NGINX process"
2022/06/22 09:05:02 [warn] 176#176: the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
nginx: [warn] the "http2_max_field_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:143
2022/06/22 09:05:02 [warn] 176#176: the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
nginx: [warn] the "http2_max_header_size" directive is obsolete, use the "large_client_header_buffers" directive instead in /etc/nginx/nginx.conf:144
2022/06/22 09:05:02 [warn] 176#176: the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
nginx: [warn] the "http2_max_requests" directive is obsolete, use the "keepalive_requests" directive instead in /etc/nginx/nginx.conf:145
2022/06/22 09:05:02 [notice] 176#176: signal process started
I0622 09:05:03.123683 6 nginx.go:404] "NGINX process has stopped"
I0622 09:05:03.123741 6 main.go:184] Handled quit, delaying controller exit for 10 seconds
I0622 09:05:13.124507 6 main.go:187] "Exiting" code=0
Can you verify that your ingress configs are OK? One set of steps you can follow to ensure that this is not a networking issue:
sudo microk8s disable ingress
sudo reboot
sudo microk8s enable ingress
sudo microk8s kubectl apply -f ingress.yaml # create an ingress resource
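As an illustration, the ingress.yaml in the last step could be something minimal like the following (hypothetical example; the name, service, and port are placeholders to be replaced with your own):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app            # placeholder name
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app  # placeholder service
            port:
              number: 80
```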
Thanks @neoaggelos for diving into my issue! Indeed, after various attempts at "fixing" the ingress and nginx-controller definitions, I was very unsure whether they were OK (and unable to verify it).
After following the steps you advised, exposing services actually works!
The most obvious difference: I did not include a reboot between disable ingress and enable ingress before. Could that really be the reason?
Whatever, thanks again 👍
(my use case (exposing the kubernetes-dashboard addon via ingress) still does not work due to certificate and URL-rewrite issues, but that's a different piece of homework 😃 )