
Add Uyuni proxy and server helm charts to artifacthub.io

Open cbosdo opened this issue 1 year ago • 4 comments

To raise awareness in the Kubernetes community, it would be nice to add the Helm charts to https://artifacthub.io/

cbosdo avatar Feb 02 '24 08:02 cbosdo

I came here after trying, and failing, to install uyuni-proxy in Rancher-based clusters.

Infrastructure information:

  • one Rancher server
  • multiple Kubernetes clusters (RKE-, K3s-based) managed via Rancher
  • in addition to Rancher Server WebUI, there's a "Linux admin node" with KUBECONFIG setups to be able to use CLI tools to access any of the clusters
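
The "use CLI tools against any cluster" part of the setup above boils down to pointing the tools at the right kubeconfig. A minimal sketch (the helper name and path layout are illustrative, not part of the actual setup; kubectl and helm honor KUBECONFIG, and mgrpxy shells out to kubectl):

```shell
# Hypothetical helper: target a given cluster purely via its kubeconfig file.
select_cluster() {
  export KUBECONFIG="$1"   # kubectl and helm both honor KUBECONFIG
}

# Example usage (path is illustrative):
# select_cluster ~/clusters/rancher.customer.com/kube/config
# kubectl get nodes
```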

First attempt:

  • provide "mgrpxy" command for the Linux admin node.
  • "kubectl get nodes" reports all nodes of the target cluster (any other kubectl call will work towards that cluster, too)
  • tried to install uyuni-proxy via the command given for the K3s installation:
rancheradm@rancher-admin:~/clusters/rancher.customer.com/kube> mgrpxy install kubernetes ~/uyuniproxy.config.5-0-0.tar.gz --logLevel=debug
7:09PM INF mgrpxy/cmd/cmd.go:40 > Welcome to mgrpxy
7:09PM INF mgrpxy/cmd/cmd.go:41 > Executing command: kubernetes
7:09PM DBG shared/utils/tar.go:61 > Extracting file /tmp/mgrpxy-1922755939/config.yaml
7:09PM DBG shared/utils/tar.go:61 > Extracting file /tmp/mgrpxy-1922755939/httpd.yaml
7:09PM DBG shared/utils/tar.go:61 > Extracting file /tmp/mgrpxy-1922755939/ssh.yaml
7:09PM DBG shared/utils/exec.go:49 > Running: kubectl get node -o jsonpath={.status.nodeInfo.kubeletVersion} rancher-admin
7:09PM DBG shared/utils/exec.go:49 > Running: kubectl get node -o jsonpath={.status.nodeInfo.kubeletVersion} rancher-admin
7:09PM FTL shared/kubernetes/kubernetes.go:54 > Failed to get kubelet version for node rancher-admin error="exit status 1"
rancheradm@rancher-admin:~/clusters/rancher.customer.com/kube>

As you can see from the debug output, mgrpxy assumes the local host is a member of the cluster (to be fair, the docs start with the installation of K3s on "the container host machine" and never move away from that host). But since the Kubernetes environment is a multi-node cluster, a better approach would be to rely only on a proper KUBECONFIG setup, so that mgrpxy can run from any machine with such a KUBECONFIG.
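
A sketch of the KUBECONFIG-only approach suggested here: instead of assuming the local hostname is a cluster node, ask the cluster itself for a node name and query that one (the helper names are made up for illustration; this is not how mgrpxy currently works):

```shell
# Ask the cluster for a node instead of assuming the local host is one.
first_node() {
  kubectl get nodes -o jsonpath='{.items[0].metadata.name}'
}

kubelet_version() {
  kubectl get node "$1" -o jsonpath='{.status.nodeInfo.kubeletVersion}'
}

# Example usage (requires a cluster reachable via KUBECONFIG):
# kubelet_version "$(first_node)"
```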

Second attempt:

  • using the Rancher web UI, adding the Helm chart repository for the Uyuni proxy failed for lack of a usable repository URL (or maybe even for lack of a usable Helm repository at all). Adding the Helm chart via the GitHub project URL failed, too. Using the oci:// URL fails as well, since OCI registries are not supported by Rancher's Helm integration.

Third attempt:

  • using Helm on the command line (Linux admin node) fails, too:
helm install uyuni-proxy oci://registry.opensuse.org/uyuni/proxy-helm -f uyuni/config.yaml -f uyuni/httpd.yaml -f uyuni/ssh.yaml
Pulled: registry.opensuse.org/uyuni/proxy-helm:2024.2.0
Digest: sha256:4e49a26160763ebd718712042e59cb6842ef30923aeb1f191e956e08a48becdc
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "ssl-router" namespace: "default" from "": no matches for kind "IngressRouteTCP" in version "traefik.containo.us/v1alpha1"
ensure CRDs are installed first, resource mapping not found for name: "ssh-router" namespace: "default" from "": no matches for kind "IngressRouteTCP" in version "traefik.containo.us/v1alpha1"
ensure CRDs are installed first, resource mapping not found for name: "salt-publish-router" namespace: "default" from "": no matches for kind "IngressRouteTCP" in version "traefik.containo.us/v1alpha1"
ensure CRDs are installed first, resource mapping not found for name: "salt-request-router" namespace: "default" from "": no matches for kind "IngressRouteTCP" in version "traefik.containo.us/v1alpha1"
ensure CRDs are installed first, resource mapping not found for name: "tftp-router" namespace: "default" from "": no matches for kind "IngressRouteUDP" in version "traefik.containo.us/v1alpha1"
ensure CRDs are installed first]

It seems I'm missing a step to install the CRDs, or the Helm chart should be self-contained and provide the CRDs itself. (And please don't mind the missing fixed LB address; we'll get to that once the Helm chart is installable at all.)
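
For what it's worth, this failure can be predicted before running helm install by checking for the Traefik CRDs that the chart's default templates reference. A hedged pre-flight sketch (the CRD names are taken from the error output above; the helper name is invented):

```shell
# Pre-flight check: print the Traefik CRDs (named in the error above) that are
# missing from the target cluster. Empty output means the default,
# Traefik-based ingress templates should map cleanly.
missing_traefik_crds() {
  for crd in ingressroutetcps.traefik.containo.us \
             ingressrouteudps.traefik.containo.us; do
    kubectl get crd "$crd" >/dev/null 2>&1 || echo "$crd"
  done
}
```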

So I second the notion of providing the Helm charts via some publicly available Helm chart repository. Whether that's artifacthub.io or a SUSE-specific one usable via Helm... well, as long as new releases are distributed automatically, I'm fine with both (and that's meant as "both", not "either one of them" ;) )

jmozd avatar Mar 24 '24 20:03 jmozd

> 7:09PM DBG shared/utils/exec.go:49 > Running: kubectl get node -o jsonpath={.status.nodeInfo.kubeletVersion} rancher-admin
> 7:09PM FTL shared/kubernetes/kubernetes.go:54 > Failed to get kubelet version for node rancher-admin error="exit status 1"
> rancheradm@rancher-admin:~/clusters/rancher.customer.com/kube>
>
> As you can see from the debug output, mgrpxy assumes the local host is a member of the cluster (to be fair, the docs start with the installation of K3s on "the container host machine" and never move away from that host). But since the Kubernetes environment is a multi-node cluster, a better approach would be to rely only on a proper KUBECONFIG setup, so that mgrpxy can run from any machine with such a KUBECONFIG.

This is worth a bug report for uyuni-tools.

> Second attempt:
>
> * using the Rancher web UI, adding the Helm chart repository for the Uyuni proxy failed for lack of a usable repository URL (or maybe even for lack of a usable Helm repository at all). Adding the Helm chart via the GitHub project URL failed, too. Using the oci:// URL fails as well, since OCI registries are not supported by Rancher's Helm integration.

For now, the only place where the Helm charts are published is an OCI registry... Publishing to artifacthub.io would indeed help with this case.

> ensure CRDs are installed first, resource mapping not found for name: "tftp-router" namespace: "default" from "": no matches for kind "IngressRouteUDP" in version "traefik.containo.us/v1alpha1"
> ensure CRDs are installed first]
>
> It seems I'm missing a step to install the CRDs, or the Helm chart should be self-contained and provide the CRDs itself. (And please don't mind the missing fixed LB address; we'll get to that once the Helm chart is installable at all.)

You are trying to install on a cluster where Traefik is not installed. Add the --set ingress=nginx parameter to set up the Helm chart for NGINX instead of Traefik. Traefik is the default for K3s, but not for RKE2, for instance.
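
Building on that: the right --set ingress=... value could be derived from what the cluster actually runs. A rough sketch (the namespace and deployment name checked here are assumptions based on a stock K3s layout; adjust for your distribution):

```shell
# Guess the ingress value for the chart: prefer traefik when its deployment
# exists in kube-system (as on stock K3s), otherwise fall back to nginx.
detect_ingress() {
  if kubectl get deployment -n kube-system traefik >/dev/null 2>&1; then
    echo traefik
  else
    echo nginx
  fi
}

# Example usage (flag as suggested above):
# helm install uyuni-proxy oci://registry.opensuse.org/uyuni/proxy-helm \
#   --set ingress="$(detect_ingress)" -f uyuni/config.yaml
```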

cbosdo avatar Mar 25 '24 07:03 cbosdo

> You are trying to install on a cluster where Traefik is not installed. Add the --set ingress=nginx parameter to set up the Helm chart for NGINX instead of Traefik.

Thank you, that did the trick; the Helm chart got installed via the CLI:

rancheradm@rancher-admin:~/clusters/rancher.customer.com/kube> helm install uyuni-proxy oci://registry.opensuse.org/uyuni/proxy-helm -f uyuni/config.yaml -f uyuni/httpd.yaml -f uyuni/ssh.yaml --set ingress=nginx
Pulled: registry.opensuse.org/uyuni/proxy-helm:2024.2.0
Digest: sha256:4e49a26160763ebd718712042e59cb6842ef30923aeb1f191e956e08a48becdc
NAME: uyuni-proxy
LAST DEPLOYED: Mon Mar 25 07:53:36 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
rancheradm@rancher-admin:~/clusters/rancher.customer.com/kube>

jmozd avatar Mar 25 '24 07:03 jmozd

> 7:09PM DBG shared/utils/exec.go:49 > Running: kubectl get node -o jsonpath={.status.nodeInfo.kubeletVersion} rancher-admin
> 7:09PM FTL shared/kubernetes/kubernetes.go:54 > Failed to get kubelet version for node rancher-admin error="exit status 1"
> rancheradm@rancher-admin:~/clusters/rancher.customer.com/kube>
>
> As you can see from the debug output, mgrpxy assumes the local host is a member of the cluster (to be fair, the docs start with the installation of K3s on "the container host machine" and never move away from that host). But since the Kubernetes environment is a multi-node cluster, a better approach would be to rely only on a proper KUBECONFIG setup, so that mgrpxy can run from any machine with such a KUBECONFIG.
>
> This is worth a bug report for uyuni-tools.

This should be fixed by https://github.com/uyuni-project/uyuni-tools/pull/213

cbosdo avatar Mar 25 '24 08:03 cbosdo