microk8s status --wait-ready hangs forever
Summary
microk8s status --wait-ready
What is supposed to happen next? The command kept running for 24 hours until I had to Ctrl-C it:
^CTraceback (most recent call last):
  File "/snap/microk8s/4966/scripts/wrappers/status.py", line 200, in <module>
    isReady = wait_for_ready(timeout)
  File "/snap/microk8s/4966/scripts/wrappers/common/utils.py", line 166, in wait_for_ready
    time.sleep(2)
What Should Happen Instead?
I don't know what's supposed to happen next. The docs don't provide an example of expected results.
Reproduction Steps
- Run the command
- Wait forever.
Introspection Report
$ microk8s inspect
Inspecting system
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-kubelite is running
  Service snap.microk8s.daemon-k8s-dqlite is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy openSSL information to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting dqlite
  Inspect dqlite
Building the report tarball
  Report tarball is at /var/snap/microk8s/4966/inspection-report-20230420_141858.tar.gz

inspection-report-20230420_141858.tar.gz
Can you suggest a fix?
No. I have no idea what's going on.
Are you interested in contributing with a fix?
How can I? I don't know what's wrong.
Hi @kingram6865
The microk8s status --wait-ready command will wait for the API server to come up and for at least one node to be ready. Looking at the logs of kubelite (journalctl -fu snap.microk8s.daemon-kubelite) I see the k8s services failing to start because of:
Apr 20 13:52:00 athena microk8s.daemon-kubelite[1930348]: E0420 13:52:00.651800 1930348 kubelet.go:1466] "Failed to start ContainerManager" err="system validation failed - Following Cgroup subsystem not mounted: [memory]"
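A quick way to confirm this on the node is to check whether the memory controller is enabled (a minimal sketch; which check applies depends on whether the system uses cgroup v1 or v2):

$ grep memory /proc/cgroups              # cgroup v1: the last column (enabled) will be 0 if the controller is off
$ cat /sys/fs/cgroup/cgroup.controllers  # cgroup v2: "memory" should appear in this list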
In https://microk8s.io/docs/install-raspberry-pi we describe the steps you need to take to enable memory cgroups.
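Roughly, the fix described there is to enable the memory cgroup on the kernel command line and reboot. A sketch (the file is /boot/firmware/cmdline.txt on Ubuntu for Raspberry Pi; older images may use /boot/cmdline.txt, so adjust the path for your setup):

$ sudo sed -i '$ s/$/ cgroup_enable=memory cgroup_memory=1/' /boot/firmware/cmdline.txt  # append to the single-line kernel cmdline
$ sudo reboot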
@kingram6865 you can apply the --timeout flag.
Also see https://github.com/canonical/microk8s/issues/3927
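For example, to give up after a fixed number of seconds instead of waiting indefinitely (the value here is illustrative):

$ microk8s status --wait-ready --timeout 60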
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.