stackstorm-k8s
Change logic from capabilities to check version
I'm trying to understand the motivation behind this change: Is version compare more reliable than capabilities?
I get the following error message while deploying to a newer version of Kubernetes:
no matches for kind "Ingress" in version "networking.k8s.io/v1beta1" ensure CRDs are installed first
I don't get this error with any other Helm chart, though.
What version of kubernetes are you using?
These are my version numbers:
Client Version: v1.24.4+k3s1
Kustomize Version: v4.5.4
Server Version: v1.24.4+k3s1
An alternative would be to allow configuring the Ingress apiVersion explicitly, like so:
{{- if .Values.ingress.apiVersion -}}
{{- .Values.ingress.apiVersion -}}
{{- end -}}
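A fuller sketch of that idea (the helper name `stackstorm-ha.ingress.apiVersion` is hypothetical, chosen here only for illustration): an explicit value always wins, and otherwise we fall back to the capability lookup the chart already relies on.

```yaml
{{/* Hypothetical helper: resolve which apiVersion to use for Ingress.
     An explicit .Values.ingress.apiVersion override always wins;
     otherwise fall back to a Capabilities lookup. */}}
{{- define "stackstorm-ha.ingress.apiVersion" -}}
{{- if .Values.ingress.apiVersion -}}
{{- .Values.ingress.apiVersion -}}
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" -}}
networking.k8s.io/v1
{{- else -}}
networking.k8s.io/v1beta1
{{- end -}}
{{- end -}}
```

The Ingress template would then use `{{ include "stackstorm-ha.ingress.apiVersion" . }}`, so users hitting a broken capabilities lookup could unblock themselves via values without a chart release.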
What version of helm are you using?
Looking at https://helm.sh/docs/chart_template_guide/builtin_objects/ I don't see .Capabilities.KubeVersion.GitVersion, which is what you're using.
Could you create a helm chart or add a template that has something like this?
{{ .Capabilities.APIVersions | mustToPrettyJson }}
It's really weird that your k8s cluster is saying it doesn't have networking.k8s.io/v1. So I'm trying to diagnose why. Maybe "k3s" is a minimal cluster that doesn't respond to API queries about what it has? If so, that's really surprising.
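One way to capture what helm actually sees is to drop a throwaway template into the chart's templates/ directory and render it (the file name and ConfigMap name below are arbitrary):

```yaml
{{/* templates/debug-capabilities.yaml -- a throwaway template for
     debugging only: dumps the API versions helm thinks the cluster
     serves, plus the kube version helm sees. Remove after use. */}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: debug-capabilities
data:
  kubeVersion: {{ .Capabilities.KubeVersion.Version | quote }}
  apiVersions: |-
{{ .Capabilities.APIVersions | mustToPrettyJson | indent 4 }}
```

Note that, as far as I know, a plain offline `helm template` fills `.Capabilities` with helm's built-in defaults; rendering as part of an actual `helm install`/`upgrade` against the cluster is the more faithful test of what the API server really advertises.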
I'm deploying this in AWS EKS right now and I get the same error while deploying.
Kubernetes version: 1.21
EKS platform: eks.9
Helm: 3.8.0
@bertyah could you please get the list of API versions that helm sees with your cluster?
Something like this:
{{ .Capabilities.APIVersions | mustToPrettyJson }}
[authentication.k8s.io/v1/TokenReview authorization.k8s.io/v1/LocalSubjectAccessReview storage.k8s.io/v1/CSIDriver apps/v1 scheduling.k8s.io/v1beta1 node.k8s.io/v1 discovery.k8s.io/v1beta1 apps/v1/DaemonSet authorization.k8s.io/v1/SubjectAccessReview authorization.k8s.io/v1beta1/SelfSubjectAccessReview storage.k8s.io/v1/VolumeAttachment admissionregistration.k8s.io/v1beta1 node.k8s.io/v1beta1 v1/LimitRange networking.k8s.io/v1/Ingress rbac.authorization.k8s.io/v1beta1/ClusterRoleBinding v1 metrics.k8s.io/v1beta1 v1/PodProxyOptions authorization.k8s.io/v1beta1/LocalSubjectAccessReview autoscaling/v2beta1 rbac.authorization.k8s.io/v1/Role apiextensions.k8s.io/v1/CustomResourceDefinition rbac.authorization.k8s.io/v1beta1/Role v1/Namespace v1/PodAttachOptions v1/Service extensions/v1beta1/Ingress authorization.k8s.io/v1beta1/SelfSubjectRulesReview batch/v1/Job rbac.authorization.k8s.io/v1beta1/RoleBinding apiregistration.k8s.io/v1 scheduling.k8s.io/v1 flowcontrol.apiserver.k8s.io/v1beta1 crd.k8s.amazonaws.com/v1alpha1 apps/v1/Deployment certificates.k8s.io/v1beta1/CertificateSigningRequest policy/v1beta1/PodSecurityPolicy storage.k8s.io/v1/StorageClass authorization.k8s.io/v1beta1/SubjectAccessReview storage.k8s.io/v1beta1/CSIDriver scheduling.k8s.io/v1beta1/PriorityClass elbv2.k8s.aws/v1beta1/IngressClassParams autoscaling/v1 batch/v1 storage.k8s.io/v1 elbv2.k8s.aws/v1beta1 v1/ServiceProxyOptions networking.k8s.io/v1/IngressClass admissionregistration.k8s.io/v1beta1/MutatingWebhookConfiguration events.k8s.io/v1 autoscaling/v2beta2 certificates.k8s.io/v1beta1 v1/Event elbv2.k8s.aws/v1beta1/TargetGroupBinding elbv2.k8s.aws/v1alpha1 v1/NodeProxyOptions autoscaling/v1/HorizontalPodAutoscaler rbac.authorization.k8s.io/v1/ClusterRoleBinding rbac.authorization.k8s.io/v1beta1/ClusterRole coordination.k8s.io/v1beta1/Lease v1/Pod autoscaling/v2beta1/HorizontalPodAutoscaler networking.k8s.io/v1beta1/IngressClass authorization.k8s.io/v1/SelfSubjectAccessReview 
authentication.k8s.io/v1beta1 extensions/v1beta1 v1/Binding v1/Eviction events.k8s.io/v1beta1 networking.k8s.io/v1beta1 v1/Secret metrics.k8s.io/v1beta1/PodMetrics authentication.k8s.io/v1beta1/TokenReview flowcontrol.apiserver.k8s.io/v1beta1/FlowSchema vpcresources.k8s.aws/v1beta1/SecurityGroupPolicy v1/PodTemplate v1/Scale events.k8s.io/v1/Event apiregistration.k8s.io/v1/APIService v1/Node v1/PodPortForwardOptions storage.k8s.io/v1beta1/CSINode v1/PersistentVolumeClaim v1/ReplicationController crd.k8s.amazonaws.com/v1alpha1/ENIConfig apps/v1/StatefulSet certificates.k8s.io/v1/CertificateSigningRequest admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration admissionregistration.k8s.io/v1beta1/ValidatingWebhookConfiguration rbac.authorization.k8s.io/v1 apiextensions.k8s.io/v1beta1 apps/v1/Scale apps/v1/ReplicaSet v1/ResourceQuota v1/ServiceAccount authorization.k8s.io/v1/SelfSubjectRulesReview node.k8s.io/v1beta1/RuntimeClass batch/v1beta1 apiextensions.k8s.io/v1 v1/Endpoints v1/PersistentVolume autoscaling/v2beta2/HorizontalPodAutoscaler rbac.authorization.k8s.io/v1/RoleBinding coordination.k8s.io/v1/Lease apiregistration.k8s.io/v1beta1 coordination.k8s.io/v1 coordination.k8s.io/v1beta1 events.k8s.io/v1beta1/Event authentication.k8s.io/v1 authorization.k8s.io/v1 batch/v1beta1/CronJob storage.k8s.io/v1beta1/VolumeAttachment apiregistration.k8s.io/v1beta1/APIService networking.k8s.io/v1/NetworkPolicy rbac.authorization.k8s.io/v1/ClusterRole certificates.k8s.io/v1 vpcresources.k8s.aws/v1beta1 apps/v1/ControllerRevision storage.k8s.io/v1/CSINode scheduling.k8s.io/v1/PriorityClass elbv2.k8s.aws/v1alpha1/TargetGroupBinding networking.k8s.io/v1 rbac.authorization.k8s.io/v1beta1 v1/ConfigMap v1/PodExecOptions authorization.k8s.io/v1beta1 apiextensions.k8s.io/v1beta1/CustomResourceDefinition metrics.k8s.io/v1beta1/NodeMetrics v1/TokenRequest storage.k8s.io/v1beta1/StorageClass flowcontrol.apiserver.k8s.io/v1beta1/PriorityLevelConfiguration 
admissionregistration.k8s.io/v1/MutatingWebhookConfiguration discovery.k8s.io/v1beta1/EndpointSlice storage.k8s.io/v1beta1 admissionregistration.k8s.io/v1 v1/ComponentStatus policy/v1beta1/PodDisruptionBudget policy/v1beta1 networking.k8s.io/v1beta1/Ingress node.k8s.io/v1/RuntimeClass]
@bertyah and @TNAJanssen How are you running helm? Are you running it with ArgoCD or something.
FWIW, I have successfully deployed an Ingress resource with an ALB on AWS EKS.
Ingress is using apiVersion: networking.k8s.io/v1
EKS version: v1.22.15
Helm version: 3.10.2
I used to manage the Ingress in-place as part of the Helm chart, but I found that handling it outside the chart was better for our use case.
Yes. The chart should be using apiVersion: networking.k8s.io/v1. But, apparently in some situations, helm's Capabilities feature isn't working correctly. I'm trying to understand what situations cause this.
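For reference, the version-compare approach the issue title refers to would look something like this (a sketch, not necessarily what the chart ships): gate on the reported Kubernetes version instead of the APIVersions set, since `networking.k8s.io/v1` Ingress is served on Kubernetes >= 1.19.

```yaml
{{/* Sketch of a version-compare fallback: select the Ingress
     apiVersion from .Capabilities.KubeVersion rather than the
     APIVersions set, which some setups seem to report incompletely. */}}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.Version -}}
apiVersion: networking.k8s.io/v1
{{- else -}}
apiVersion: networking.k8s.io/v1beta1
{{- end -}}
```

The `-0` suffix in the constraint matters: distro version strings like `v1.21.2-eks-...` carry pre-release tags, and a plain `>=1.19` constraint would reject them under semver rules.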
Just to clarify, I am using the chart from helm.stackstorm.com, which I believe is 0.100.0; I am not cloning this repo.
Are you manually running helm? Or is some other tool responsible for running helm for you? I'm thinking of tools like Ansible, StackStorm, ArgoCD, Jenkins, Pantsbuild, ...
I use some simple shell wrappers around helm so that I can run it the same way every time. Do you do something similar?
If you are manually running helm, where do you run it? In a linux VM? A Mac laptop? WSL2 on Windows?
@cognifloyd We have a docker container we use that has all the related utilities for working with k8s. From there it's essentially a wrapper command tool that runs kubectl and helm.