argo-helm
Argo rollouts templates do not include any keys in the data
Describe the bug
Following the example in the argo-rollouts values file results in this error:
```
one or more objects failed to apply, reason: error when patching "/dev/shm/1381037924": "" is invalid: patch: Invalid value: "map[data:- location: https://github.com/argoproj-labs/rollouts-plugin-trafficrouter-gatewayapi/releases/download/v0.4.0/gatewayapi-plugin-linux-arm64\n name: argoproj-labs/gatewayAPI metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{\"apiVersion\":\"v1\",\"data\":\"- location: https://github.com/argoproj-labs/rollouts-plugin-trafficrouter-gatewayapi/releases/download/v0.4.0/gatewayapi-plugin-linux-arm64\\n name: argoproj-labs/gatewayAPI\",\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/component\":\"rollouts-controller\",\"app.kubernetes.io/instance\":\"argo-rollouts\",\"app.kubernetes.io/managed-by\":\"Helm\",\"app.kubernetes.io/name\":\"argo-rollouts\",\"app.kubernetes.io/part-of\":\"argo-rollouts\",\"app.kubernetes.io/version\":\"v1.7.1\",\"argocd.argoproj.io/instance\":\"argo-rollouts-dev\",\"helm.sh/chart\":\"argo-rollouts-2.37.3\"},\"name\":\"argo-rollouts-config\",\"namespace\":\"argo-rollouts\"}}\n]]]": cannot restore map from string. Retrying attempt #2 at 2:02PM.
```
Related helm chart
argo-rollouts
Helm chart version
argo-rollouts:2.37.3
To Reproduce
- Create this ApplicationSet:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: argo-rollouts
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    - list:
        elements:
          - cluster: dev
            version: 2.37.3
  template:
    metadata:
      name: 'argo-rollouts-{{.cluster}}'
      namespace: argocd
      labels:
        name: '{{.cluster}}-applications'
    spec:
      project: default
      source:
        chart: argo-rollouts
        repoURL: https://argoproj.github.io/argo-helm
        targetRevision: '{{.version}}'
        helm:
          releaseName: argo-rollouts
          valuesObject:
            controller:
              metrics:
                enabled: true
              trafficRouterPlugins: |-
                - name: "argoproj-labs/gatewayAPI"
                  location: "https://github.com/argoproj-labs/rollouts-plugin-trafficrouter-gatewayapi/releases/download/v0.4.0/gatewayapi-plugin-linux-arm64"
      destination:
        name: '{{.cluster}}'
        namespace: argo-rollouts
      syncPolicy:
        automated:
          selfHeal: true # run a partial app sync when resources change only in the target Kubernetes cluster with no git change detected (false by default)
        retry:
          limit: 5 # number of failed sync attempt retries; unlimited number of attempts if less than 0
          backoff:
            duration: 5s # the amount to back off; default unit is seconds, but a duration such as "2m" or "1h" also works
            factor: 2 # a factor to multiply the base duration after each failed retry
            maxDuration: 3m # the maximum amount of time allowed for the backoff strategy
        syncOptions:
          - CreateNamespace=true
      revisionHistoryLimit: 10
```
- Apply the ApplicationSet to the cluster hosting Argo CD.
- Observe that the generated argo-rollouts-config ConfigMap has the trafficRouterPlugins values placed directly under data, without the expected trafficRouterPlugins subkey, and that Kubernetes complains that it cannot restore a map from a string (see the bug description).
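The rejection follows from how the API server validates ConfigMaps: `data` must be `map[string]string`. A minimal Python sketch of that check (the helper name and variables are mine, purely illustrative):

```python
# Minimal sketch of the validation the API server is effectively applying:
# ConfigMap "data" must be a map of string keys to string values.
def is_valid_configmap_data(data):
    """True only for map[string]string-shaped data."""
    return isinstance(data, dict) and all(
        isinstance(k, str) and isinstance(v, str) for k, v in data.items()
    )

plugins_yaml = (
    "- location: https://github.com/argoproj-labs/"
    "rollouts-plugin-trafficrouter-gatewayapi/releases/download/"
    "v0.4.0/gatewayapi-plugin-linux-arm64\n"
    "  name: argoproj-labs/gatewayAPI"
)

broken_data = plugins_yaml                           # string straight under data
fixed_data = {"trafficRouterPlugins": plugins_yaml}  # string under its own subkey

print(is_valid_configmap_data(broken_data))  # False
print(is_valid_configmap_data(fixed_data))   # True
```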
Expected behavior
The specified ApplicationSet creates this ConfigMap:
```yaml
apiVersion: v1
data:
  trafficRouterPlugins: |-
    - location: https://github.com/argoproj-labs/rollouts-plugin-trafficrouter-gatewayapi/releases/download/v0.4.0/gatewayapi-plugin-linux-arm64
      name: argoproj-labs/gatewayAPI
```
instead of trying to stand up a bad ConfigMap.
Screenshots
No response
Additional context
I'm currently working around the issue with this:
```yaml
...
valuesObject:
  controller:
    metrics:
      enabled: true
    trafficRouterPlugins:
      trafficRouterPlugins: |-
        - location: https://github.com/argoproj-labs/rollouts-plugin-trafficrouter-gatewayapi/releases/download/v0.4.0/gatewayapi-plugin-linux-arm64
          name: argoproj-labs/gatewayAPI
```
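The workaround relies on my reading of the template's behavior: whatever value sits at `controller.trafficRouterPlugins` appears to be dumped directly under `data`, so pre-nesting the subkey yourself produces the shape Kubernetes expects. A toy Python model of that guess (not the chart's actual code):

```python
# Toy model of the suspected template behavior: the value at
# controller.trafficRouterPlugins lands as-is under "data".
def render_data(plugins_value):
    return plugins_value  # the template adds no subkey of its own

plugins = (
    "- location: https://github.com/argoproj-labs/"
    "rollouts-plugin-trafficrouter-gatewayapi/releases/download/"
    "v0.4.0/gatewayapi-plugin-linux-arm64\n"
    "  name: argoproj-labs/gatewayAPI"
)

# The documented example passes the string itself, so "data" becomes a
# bare string, which the API server cannot restore into map[string]string.
print(type(render_data(plugins)))  # <class 'str'>

# The workaround passes a dict keyed by the subkey, so the same dump
# yields valid map[string]string data.
print(type(render_data({"trafficRouterPlugins": plugins})))  # <class 'dict'>
```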
I believe the issue is here. From what I can tell, both trafficRouterPlugins and metricProviderPlugins have their values dumped directly into the root data key rather than placed under their own subkeys.
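For reference, this is roughly the shape I'd expect the template to produce; a hypothetical sketch, not the chart's actual template source:

```yaml
# Hypothetical template sketch (not the real chart code): each plugin
# list gets its own subkey under data.
data:
  {{- with .Values.controller.trafficRouterPlugins }}
  trafficRouterPlugins: |-
    {{- . | nindent 4 }}
  {{- end }}
  {{- with .Values.controller.metricProviderPlugins }}
  metricProviderPlugins: |-
    {{- . | nindent 4 }}
  {{- end }}
```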
I could open a PR to fix this, but I'm unclear on my company's OSS contribution guidelines, so I'm sticking with my workaround for now.