microk8s connect-external-ceph throws Error: INSTALLATION FAILED: failed to download "rook-release/rook-ceph-cluster"
Summary
In a clean install of:
- Ubuntu 22.04.3 LTS
- microk8s
- microceph

After installing the rook-ceph addon, I receive this message:
```
$ microk8s connect-external-ceph
Looking for MicroCeph on the host
Detected existing MicroCeph installation
Attempting to connect to Ceph cluster
Successfully connected to 9b685eff-ea73-4f42-8d89-5f7c98e19879 (192.168.0.196:0/4140507869)
WARNING: Pool microk8s-rbd0 already exists
Configuring pool microk8s-rbd0 for RBD
Successfully configured pool microk8s-rbd0 for RBD
Creating namespace rook-ceph-external
Error from server (AlreadyExists): namespaces "rook-ceph-external" already exists
Configuring Ceph CSI secrets
Successfully configured Ceph CSI secrets
Importing Ceph CSI secrets into MicroK8s
secret rook-ceph-mon already exists
configmap rook-ceph-mon-endpoints already exists
secret rook-csi-rbd-node already exists
secret csi-rbd-provisioner already exists
storageclass ceph-rbd already exists
Importing external Ceph cluster
Error: INSTALLATION FAILED: failed to download "rook-release/rook-ceph-cluster"
=================================================
Successfully imported external Ceph cluster. You can now use the following storageclass
to provision PersistentVolumes using Ceph CSI:

NAME       PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   3m8s
```
The part of the output that I think indicates the problem is:

```
Error: INSTALLATION FAILED: failed to download "rook-release/rook-ceph-cluster"
```

Everything else looks as if it should work, but the Storage Class in Kubernetes reports this event:

```
Waiting for a volume to be created either by the external provisioner 'rook-ceph.rbd.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
```

So the storage class is not working properly and does not provision storage.
What Should Happen Instead?
The Storage Class should provision storage correctly.
Reproduction Steps
- Configure a clean install of microk8s with 3 nodes in total
- Configure a clean install of microceph with 3 nodes in total and 3 disks
- Run these commands:

```
$ microk8s enable rook-ceph
$ microk8s connect-external-ceph
```
Introspection Report
```
Inspecting system
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-kubelite is running
  Service snap.microk8s.daemon-k8s-dqlite is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy openSSL information to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy asnycio usage and limits to the final report tarball
  Copy inotify max_user_instances and max_user_watches to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting dqlite
  Inspect dqlite

WARNING: Maximum number of inotify user watches is less than the recommended value of 1048576.
Increase the limit with:
    echo fs.inotify.max_user_watches=1048576 | sudo tee -a /etc/sysctl.conf
    sudo sysctl --system

Building the report tarball
  Report tarball is at /var/snap/microk8s/6089/inspection-report-20231122_104030.tar.gz
```

inspection-report-20231122_104030.tar.gz
Can you suggest a fix?
no
Are you interested in contributing with a fix?
no
I also tried with a single microk8s node and a single microceph node, and it gave me the same problem.
Hi @adrianeguez
How did you enable the addon? Can you please try again with sudo?

```
sudo microk8s connect-external-ceph
```
Alternatively, it might help to add the rook-ceph repo manually, then re-run the command:

```
microk8s helm repo add rook-release https://charts.rook.io/release
microk8s helm repo update
microk8s connect-external-ceph
```
If you are running some commands with sudo, then it might be that the helm repos get mixed up; compare, for example:

```
microk8s helm repo ls
sudo microk8s helm repo ls
```
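The reason those two lists can differ: Helm keeps its repository list in a per-user file resolved from the invoking user's HOME. A minimal sketch of the paths involved (these are upstream Helm's Linux defaults; MicroK8s's bundled Helm may relocate them, but the per-user split works the same way):

```shell
# Helm resolves repositories.yaml relative to the invoking user's HOME
# (XDG default on Linux), so root and a regular user each maintain a
# separate repository list.
helm_repo_file() {
  # $1: the HOME directory of the user in question (illustrative helper,
  # not part of Helm or MicroK8s)
  echo "${1}/.config/helm/repositories.yaml"
}

helm_repo_file "$HOME"   # the file `microk8s helm repo ls` would reflect
helm_repo_file "/root"   # the file `sudo microk8s helm repo ls` would reflect
```

If `rook-release` was added only to one of the two files, then only the matching invocation (with or without sudo) will be able to download the chart.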
Let me know if this helps! Thanks!
I'm getting this too. It appears to be failing to download; possibly a URL was updated somewhere? Digging into the source now; I'll push a fix if I find anything. I do notice that the pod comes up, but I can't connect to it when I try `kubectl exec` or `kubectl logs`.
Hi @v1nsai
It might help to notice the difference between these two commands (since the repos are stored in a different folder for each):

```
microk8s helm repo ls
sudo microk8s helm repo ls
```

In general, if you did `sudo microk8s enable rook-ceph`, then `sudo microk8s connect-external-ceph` is probably needed.
@neoaggelos YES, THAT'S IT! The repo shows up fine when running `microk8s helm repo ls` without sudo, but isn't there when using sudo. I probably didn't use sudo when I ran `microk8s enable rook-ceph`, since I had added my user to the `microk8s` group to avoid it; but I was getting permission errors when trying to run `microk8s connect-external-ceph` without sudo, so perhaps I did?
To sum it up: if you have to run `sudo microk8s connect-external-ceph`, the fix seems to be to run `sudo microk8s helm repo add rook-release https://charts.rook.io/release` first. It got me up and running!
After following all of the steps, everything was checking out fine, but I was still receiving the following error:

```
Error: INSTALLATION FAILED: failed to download "rook-release/rook-ceph-cluster"
```

I was able to fix it simply with:

```
sudo -E microk8s connect-external-ceph
```

The `-E` flag retains environment variables from the user's profile. Hope this helps!
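The effect of `-E` can be seen without touching Helm at all: plain `sudo` starts the command with a mostly reset environment, while `-E` carries the caller's variables across (subject to the sudoers policy). A small simulation using `env` instead of `sudo`, so it runs unprivileged (`HELM_DEMO_VAR` is a made-up variable standing in for things like `HOME` or `KUBECONFIG`):

```shell
# Simulate the environment reset that plain sudo performs (env -i)
# versus the preservation that sudo -E provides (plain env).
export HELM_DEMO_VAR="from-user-profile"   # illustrative variable

# Reset environment: the child process does not see the caller's variables.
env -i sh -c 'echo "reset:     HELM_DEMO_VAR=${HELM_DEMO_VAR:-<unset>}"'

# Preserved environment: the child process sees them.
env sh -c 'echo "preserved: HELM_DEMO_VAR=${HELM_DEMO_VAR:-<unset>}"'
```

If the chart download only succeeds with `-E`, that suggests the Helm repo configuration (or proxy settings) live in the calling user's environment rather than root's.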
Is there any way that `connect-external-ceph` can bring up the Ceph dashboard?