
Microk8s crashed upon status check; Cgroup not enabled false warning

RyzeNGrind opened this issue 2 years ago · 17 comments

Summary

I checked microk8s status and received a crash error telling me to check with microk8s inspect. Running microk8s inspect gives a warning that cgroups are not enabled, but upon further inspection of all my arm64 and amd64 devices I believe cgroups are already enabled.

What Should Happen Instead?

Expected behavior is for microk8s status to report the current MicroK8s cluster status with no issues, since cgroups are already enabled on all nodes added to the cluster context microk8s-cluster.

Reproduction Steps

  1. microk8s status
  2. microk8s inspect

Introspection Report

Inspecting system
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-kubelite is running
  Service snap.microk8s.daemon-k8s-dqlite is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy openSSL information to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting dqlite
  Inspect dqlite

 WARNING:  The memory cgroup is not enabled.
The cluster may not be functioning properly. Please ensure cgroups are enabled
See for example: https://microk8s.io/docs/install-alternatives#heading--arm
Building the report tarball
  Report tarball is at /var/snap/microk8s/3596/inspection-report-20220806_182428.tar.gz

Can you suggest a fix?

I checked if cgroups were enabled on the amd64 host shogun and on one of my arm64 nodes, calm-fox:

ryzengrind@shogun:~$ cat /proc/mounts | grep "cgroup"
cgroup2 /sys/fs/cgroup cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot 0 0

ryzengrind@calm-fox:~$ cat /proc/mounts | grep "cgroup"
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755,inode64 0 0
cgroup2 /sys/fs/cgroup/unified cgroup2 rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd 0 0
cgroup /sys/fs/cgroup/rdma cgroup rw,nosuid,nodev,noexec,relatime,rdma 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/misc cgroup rw,nosuid,nodev,noexec,relatime,misc 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset,clone_children 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0

Not sure what next steps I should take to proceed here.
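For reference, the memory controller can also be checked directly, independent of the mount listing above. A minimal check, assuming the standard locations (cgroup v2 exposes the enabled controllers in cgroup.controllers; v1 exposes per-controller status in /proc/cgroups):

$ cat /sys/fs/cgroup/cgroup.controllers   # cgroup v2: "memory" should appear in the list
$ grep memory /proc/cgroups               # cgroup v1: the last ("enabled") column should be 1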

Are you interested in contributing with a fix?

yes

RyzeNGrind commented on Aug 06 '22

Hi @RyzeNGrind, could you please attach the inspection tarball? Thank you.

ktsakalozos commented on Aug 08 '22

inspection-report-20220806_182428.tar.gz Forgot to attach it earlier, sorry.

RyzeNGrind commented on Aug 08 '22

Hi @RyzeNGrind, could you please attach the inspection tarball? Thank you.

Attached tarball as requested.

RyzeNGrind commented on Aug 12 '22

Hi. I have the same problem with microk8s version 1.25, but the problem disappears when downgrading to version 1.24.
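(For reference, the downgrade is just a snap channel switch, something along the lines of:)

$ sudo snap refresh microk8s --channel=1.24/stable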

romainrossi commented on Sep 01 '22

Hi @romainrossi

Can you share the output of mount | grep cgroup on your machine? Can you also check if the commands below resolve your issue? They should force MicroK8s to use cgroups v1 instead:

sudo sed -i 's/${RUNTIME_TYPE}/io.containerd.runc.v1/' /var/snap/microk8s/current/args/containerd-template.toml
sudo snap restart microk8s.daemon-containerd
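To double-check that the substitution took effect (assuming the template carries a runtime_type entry), something like this should match:

$ grep -n 'io.containerd.runc.v1' /var/snap/microk8s/current/args/containerd-template.toml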

Thanks!

neoaggelos commented on Sep 07 '22

Hi @neoaggelos

Thank you very much for your suggestion. I am working on Ubuntu 20.04 (focal) on 3 different amd64 machines with Intel CPUs. I got exactly the same issue as described above on all 3 machines (one laptop, one desktop, one rack server).

Here is the requested command output:

server $ mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)

desktop $ mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755,inode64)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/misc type cgroup (rw,nosuid,nodev,noexec,relatime,misc)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)

The following test was run on the desktop machine only, before applying the suggested workaround.

$ snap refresh microk8s --channel=v1.25
$ microk8s.start
$ sudo microk8s.reset
$ microk8s.stop
$ microk8s.start

$ microk8s.inspect
Inspecting system
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-kubelite is running
  Service snap.microk8s.daemon-k8s-dqlite is running
 FAIL:  Service snap.microk8s.daemon-apiserver-proxy is not running
For more details look at: sudo journalctl -u snap.microk8s.daemon-apiserver-proxy
  Service snap.microk8s.daemon-apiserver-kicker is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy openSSL information to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting dqlite
  Inspect dqlite

Building the report tarball
  Report tarball is at /var/snap/microk8s/3827/inspection-report-20220910_093721.tar.gz

$ sudo journalctl -u snap.microk8s.daemon-apiserver-proxy | tail
sept. 10 09:36:39 vault15 microk8s.daemon-apiserver-proxy[131031]: + ARCH=x86_64
sept. 10 09:36:39 vault15 microk8s.daemon-apiserver-proxy[131031]: + export LD_LIBRARY_PATH=/var/lib/snapd/lib/gl:/var/lib/snapd/lib/gl32:/var/lib/snapd/void::/snap/microk8s/3827/lib:/snap/microk8s/3827/usr/lib:/snap/microk8s/3827/lib/x86_64-linux-gnu:/snap/microk8s/3827/usr/lib/x86_64-linux-gnu
sept. 10 09:36:39 vault15 microk8s.daemon-apiserver-proxy[131031]: + LD_LIBRARY_PATH=/var/lib/snapd/lib/gl:/var/lib/snapd/lib/gl32:/var/lib/snapd/void::/snap/microk8s/3827/lib:/snap/microk8s/3827/usr/lib:/snap/microk8s/3827/lib/x86_64-linux-gnu:/snap/microk8s/3827/usr/lib/x86_64-linux-gnu
sept. 10 09:36:39 vault15 microk8s.daemon-apiserver-proxy[131031]: + source /snap/microk8s/3827/actions/common/utils.sh
sept. 10 09:36:39 vault15 microk8s.daemon-apiserver-proxy[131031]: ++ [[ /snap/microk8s/3827/run-apiserver-proxy-with-args == \/\s\n\a\p\/\m\i\c\r\o\k\8\s\/\3\8\2\7\/\a\c\t\i\o\n\s\/\c\o\m\m\o\n\/\u\t\i\l\s\.\s\h ]]
sept. 10 09:36:39 vault15 microk8s.daemon-apiserver-proxy[131031]: + '[' -e /var/snap/microk8s/3827/var/lock/clustered.lock ']'
sept. 10 09:36:39 vault15 microk8s.daemon-apiserver-proxy[131031]: + echo 'Not a worker node, exiting'
sept. 10 09:36:39 vault15 microk8s.daemon-apiserver-proxy[131031]: Not a worker node, exiting
sept. 10 09:36:39 vault15 microk8s.daemon-apiserver-proxy[131031]: + exit 0
sept. 10 09:36:39 vault15 systemd[1]: snap.microk8s.daemon-apiserver-proxy.service: Succeeded.

The problem seems different from before (nothing about cgroups this time). Maybe an update of microk8s v1.25 happened in between? Still, the API server doesn't work...
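(For what it's worth, whether a refresh happened can be checked from snap itself, e.g.:)

$ snap list microk8s    # installed revision and tracking channel
$ snap changes | tail   # recent snap operations, including auto-refreshes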

Then, applying the workaround:

$ sudo sed -i 's/${RUNTIME_TYPE}/io.containerd.runc.v1/' /var/snap/microk8s/current/args/containerd-template.toml
$ sudo snap restart microk8s.daemon-containerd
Restarted.
$ sudo microk8s.stop
Stopped.
$ sudo microk8s.start
$ microk8s.inspect
[inspection-report-20220910_094423.tar.gz](https://github.com/canonical/microk8s/files/9539779/inspection-report-20220910_094423.tar.gz)

Inspecting system
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-kubelite is running
  Service snap.microk8s.daemon-k8s-dqlite is running
 FAIL:  Service snap.microk8s.daemon-apiserver-proxy is not running
For more details look at: sudo journalctl -u snap.microk8s.daemon-apiserver-proxy
  Service snap.microk8s.daemon-apiserver-kicker is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy openSSL information to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting dqlite
  Inspect dqlite

Building the report tarball
  Report tarball is at /var/snap/microk8s/3827/inspection-report-20220910_094423.tar.gz

$ sudo journalctl -u snap.microk8s.daemon-apiserver-proxy | tail
sept. 10 09:43:26 vault15 microk8s.daemon-apiserver-proxy[141113]: + ARCH=x86_64
sept. 10 09:43:26 vault15 microk8s.daemon-apiserver-proxy[141113]: + export LD_LIBRARY_PATH=/var/lib/snapd/lib/gl:/var/lib/snapd/lib/gl32:/var/lib/snapd/void::/snap/microk8s/3827/lib:/snap/microk8s/3827/usr/lib:/snap/microk8s/3827/lib/x86_64-linux-gnu:/snap/microk8s/3827/usr/lib/x86_64-linux-gnu
sept. 10 09:43:26 vault15 microk8s.daemon-apiserver-proxy[141113]: + LD_LIBRARY_PATH=/var/lib/snapd/lib/gl:/var/lib/snapd/lib/gl32:/var/lib/snapd/void::/snap/microk8s/3827/lib:/snap/microk8s/3827/usr/lib:/snap/microk8s/3827/lib/x86_64-linux-gnu:/snap/microk8s/3827/usr/lib/x86_64-linux-gnu
sept. 10 09:43:26 vault15 microk8s.daemon-apiserver-proxy[141113]: + source /snap/microk8s/3827/actions/common/utils.sh
sept. 10 09:43:26 vault15 microk8s.daemon-apiserver-proxy[141113]: ++ [[ /snap/microk8s/3827/run-apiserver-proxy-with-args == \/\s\n\a\p\/\m\i\c\r\o\k\8\s\/\3\8\2\7\/\a\c\t\i\o\n\s\/\c\o\m\m\o\n\/\u\t\i\l\s\.\s\h ]]
sept. 10 09:43:26 vault15 microk8s.daemon-apiserver-proxy[141113]: + '[' -e /var/snap/microk8s/3827/var/lock/clustered.lock ']'
sept. 10 09:43:26 vault15 microk8s.daemon-apiserver-proxy[141113]: + echo 'Not a worker node, exiting'
sept. 10 09:43:26 vault15 microk8s.daemon-apiserver-proxy[141113]: Not a worker node, exiting
sept. 10 09:43:26 vault15 microk8s.daemon-apiserver-proxy[141113]: + exit 0
sept. 10 09:43:26 vault15 systemd[1]: snap.microk8s.daemon-apiserver-proxy.service: Succeeded.

So the workaround didn't help much... I am attaching the inspection tarball. Reverting to v1.24 solved the issue.

romainrossi commented on Sep 10 '22

Hi @romainrossi

Looking at the attached tarball, the cluster seems to be running fine (see output of kubectl below). Could it be that this is just a false alarm from the inspection script?

from `inspection-report/k8s/get-all`
NAMESPACE            NAME                                           READY   STATUS    RESTARTS      AGE   IP             NODE      NOMINATED NODE   READINESS GATES
kube-system          pod/hostpath-provisioner-664f557f54-52zrw      1/1     Running   3 (54s ago)   15m   10.1.241.143   vault15   <none>           <none>
metallb-system       pod/speaker-52ck6                              1/1     Running   3 (54s ago)   13m   192.168.8.15   vault15   <none>           <none>
metallb-system       pod/controller-f5cb789bc-2v6lc                 1/1     Running   2 (54s ago)   13m   10.1.241.142   vault15   <none>           <none>
container-registry   pod/registry-c68d59984-jtzcv                   1/1     Running   2 (54s ago)   15m   10.1.241.144   vault15   <none>           <none>
kube-system          pod/coredns-7847999f6f-kl9f9                   1/1     Running   2 (54s ago)   15m   10.1.241.140   vault15   <none>           <none>
kube-system          pod/calico-kube-controllers-54c85446d4-t7cpq   1/1     Running   2 (54s ago)   16m   10.1.241.141   vault15   <none>           <none>
kube-system          pod/calico-node-q4zgf                          1/1     Running   2 (54s ago)   16m   192.168.8.15   vault15   <none>           <none>

NAMESPACE            NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE    SELECTOR
default              service/kubernetes   ClusterIP   10.152.183.1     <none>        443/TCP                  106d   <none>
default              service/demo         ClusterIP   10.152.183.212   <none>        80/TCP                   102d   app=demo
container-registry   service/registry     NodePort    10.152.183.178   <none>        5000:32000/TCP           15m    app=registry
kube-system          service/kube-dns     ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP   15m    k8s-app=kube-dns

NAMESPACE        NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE   CONTAINERS    IMAGES                          SELECTOR
kube-system      daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux        18m   calico-node   docker.io/calico/node:v3.23.3   k8s-app=calico-node
metallb-system   daemonset.apps/speaker       1         1         1       1            1           beta.kubernetes.io/os=linux   14m   speaker       metallb/speaker:v0.9.3          app=metallb,component=speaker

NAMESPACE            NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                IMAGES                                      SELECTOR
kube-system          deployment.apps/calico-kube-controllers   1/1     1            1           18m   calico-kube-controllers   docker.io/calico/kube-controllers:v3.23.3   k8s-app=calico-kube-controllers
kube-system          deployment.apps/coredns                   1/1     1            1           15m   coredns                   coredns/coredns:1.9.0                       k8s-app=kube-dns
container-registry   deployment.apps/registry                  1/1     1            1           15m   registry                  registry:2.7.1                              app=registry
metallb-system       deployment.apps/controller                1/1     1            1           14m   controller                metallb/controller:v0.9.3                   app=metallb,component=controller
kube-system          deployment.apps/hostpath-provisioner      1/1     1            1           15m   hostpath-provisioner      cdkbot/hostpath-provisioner:1.2.0           k8s-app=hostpath-provisioner

NAMESPACE            NAME                                                 DESIRED   CURRENT   READY   AGE   CONTAINERS                IMAGES                                      SELECTOR
kube-system          replicaset.apps/calico-kube-controllers-54c85446d4   1         1         1       16m   calico-kube-controllers   docker.io/calico/kube-controllers:v3.23.3   k8s-app=calico-kube-controllers,pod-template-hash=54c85446d4
kube-system          replicaset.apps/coredns-7847999f6f                   1         1         1       15m   coredns                   coredns/coredns:1.9.0                       k8s-app=kube-dns,pod-template-hash=7847999f6f
container-registry   replicaset.apps/registry-c68d59984                   1         1         1       15m   registry                  registry:2.7.1                              app=registry,pod-template-hash=c68d59984
metallb-system       replicaset.apps/controller-f5cb789bc                 1         1         1       14m   controller                metallb/controller:v0.9.3                   app=metallb,component=controller,pod-template-hash=f5cb789bc
kube-system          replicaset.apps/hostpath-provisioner-664f557f54      1         1         1       15m   hostpath-provisioner      cdkbot/hostpath-provisioner:1.2.0           k8s-app=hostpath-provisioner,pod-template-hash=664f557f54

Can you test whether you can use MicroK8s 1.25 (create a deployment, a service, etc.)? A minimal smoke test along the lines shown below would do. Additionally, can you please provide the output of ls /var/snap/microk8s/current/var/lock? Thanks!
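For example, a quick smoke test (the nginx deployment name and image are arbitrary placeholders):

$ microk8s kubectl create deployment nginx --image=nginx
$ microk8s kubectl expose deployment nginx --port=80
$ microk8s kubectl get pods,svc -o wide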

neoaggelos commented on Sep 10 '22

Hi @neoaggelos

I came across this issue report because the API server of MicroK8s 1.25 was not working for my project. I solved the issue by downgrading to 1.24, so the cluster was definitely not working fine.

But in the previous test I did this morning (see my previous message), I didn't get exactly the same issue (it no longer mentioned cgroups).

So, here is a new test performed now.

$ snap refresh microk8s --channel 1.25
$ sudo microk8s.reset
$ microk8s.enable registry #needed for my test
$ make push #build my docker image and push it to the microk8s registry
$ microk8s.kubectl apply -f deployment.yml
$ make magic # run my test which uses the API server
test successful

$ microk8s.inspect
Inspecting system
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-kubelite is running
  Service snap.microk8s.daemon-k8s-dqlite is running
 FAIL:  Service snap.microk8s.daemon-apiserver-proxy is not running
For more details look at: sudo journalctl -u snap.microk8s.daemon-apiserver-proxy
  Service snap.microk8s.daemon-apiserver-kicker is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy openSSL information to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting dqlite
  Inspect dqlite

Building the report tarball
  Report tarball is at /var/snap/microk8s/3827/inspection-report-20220910_184021.tar.gz

$ ls /var/snap/microk8s/current/var/lock
cni-loaded  ha-cluster  lite.lock  no-etcd  no-flanneld  no-traefik

With this new test, the API server seems to work (my project runs normally), but microk8s.inspect still reports an error.

I am confused, as I expected the test to fail like the previous attempts. However, the cluster has been reset twice today; maybe the API server problem comes from something else...

Definitely, something has changed which seems to solve (at least partially) the original issue with cgroups.

I am sorry this doesn't give useful information to tackle the problem. For the moment, I will keep v1.24 running on my server to be safe. Thank you very much for your time and advice!

romainrossi commented on Sep 10 '22

FAIL:  Service snap.microk8s.daemon-apiserver-proxy is not running

This is a false alarm: the apiserver-proxy service is only needed on MicroK8s worker nodes. A no-apiserver-proxy file under var/lock/ should signal this, but there does not seem to be one. It could be worth reviewing the code to see how the node got into this state, however.

In any case, the command below should clear the microk8s inspect alarm:

sudo touch /var/snap/microk8s/current/var/lock/no-apiserver-proxy
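After creating the flag file, re-running the inspect script should no longer flag the service; a quick way to confirm:

$ ls /var/snap/microk8s/current/var/lock   # should now include no-apiserver-proxy
$ microk8s inspect                         # the apiserver-proxy FAIL should be gone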

Thanks for following up.

neoaggelos commented on Sep 12 '22

I have the same problem with v1.25. microk8s.inspect reports no error after I downgraded to v1.24, but microk8s.status still shows:

microk8s is not running. Use microk8s inspect for a deeper inspection.

bioinformatist commented on Sep 23 '22

Well, actually, downgrading to v1.24 solves this problem. But the GFW may still cause k8s initialization to fail. See https://github.com/canonical/microk8s/issues/886#issuecomment-1256861859

bioinformatist commented on Sep 24 '22

I had this too; adding the no-apiserver-proxy file removed the error message

sudo touch /var/snap/microk8s/current/var/lock/no-apiserver-proxy

... but microk8s still did not start.

That turned out to be an issue with certificates:

sudo microk8s.refresh-certs -c

The CA certificate will expire in 3282 days. The server certificate will expire in -3 days. The front proxy client certificate will expire in -3 days.

So then:

sudo microk8s.refresh-certs -e server.crt
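(For anyone who wants to check the expiry dates directly, openssl can read the certificates; the paths below assume the standard MicroK8s snap data location:)

$ sudo openssl x509 -noout -enddate -in /var/snap/microk8s/current/certs/server.crt
$ sudo openssl x509 -noout -enddate -in /var/snap/microk8s/current/certs/ca.crt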

gsnsw-felixs commented on Oct 24 '22

FAIL:  Service snap.microk8s.daemon-apiserver-proxy is not running

This is a false alarm: the apiserver-proxy service is only needed on MicroK8s worker nodes. A no-apiserver-proxy file under var/lock/ should signal this, but there does not seem to be one. It could be worth reviewing the code to see how the node got into this state, however.

In any case, the command below should clear the microk8s inspect alarm:

sudo touch /var/snap/microk8s/current/var/lock/no-apiserver-proxy

Thanks for following up.

This solved it for me. Is there any ongoing work towards a fix for that? Currently using microk8s 1.25.3 from the 1.25/stable channel for testing, but I don't plan on updating the prod servers until this is sorted out.

Pflegusch commented on Nov 21 '22

@Pflegusch this has been fixed in latest master (https://github.com/canonical/microk8s/pull/3632) and backported to 1.25 as well (https://github.com/canonical/microk8s/pull/3633).

In any case, looks like the inspect script here gives a false positive regarding the underlying issue, and that has been dealt with as well.

neoaggelos commented on Jan 04 '23

Updated from 1.24/stable (1.24.10) to 1.25/stable (now running 1.25.5), and the issue still persists: running microk8s inspect still gives me

WARNING:  The memory cgroup is not enabled. 
The cluster may not be functioning properly. Please ensure cgroups are enabled

even though cgroups are enabled and the cluster is otherwise running perfectly fine.

Pflegusch commented on Feb 07 '23

Updated from 1.24/stable (1.24.10) to 1.25/stable (now running 1.25.5), and the issue still persists: running microk8s inspect still gives me

WARNING:  The memory cgroup is not enabled. 
The cluster may not be functioning properly. Please ensure cgroups are enabled

even though cgroups are enabled and the cluster is otherwise running perfectly fine.

I am having the exact same issue on 1.27... please fix this!

imrj commented on Jul 09 '23