Erroneous annotations are silently dropped when deploying plain manifests
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I pushed a plain Ingress manifest with some annotations to a GitRepo, and I can see them in the corresponding Bundle, which appears to be successfully deployed:
```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: Bundle
[...]
spec:
  [...]
  resources:
    [...]
    - content: |
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: flame
          namespace: flame
          annotations:
            cert-manager.io/issuer: letsencrypt-prod
            flame.pawelmalak/type: app
            flame.pawelmalak/name: MDAPI Home (dev)
            flame.pawelmalak/url: https://dev.mdapi.ch
            flame.pawelmalak/category: General
            flame.pawelmalak/icon: home
            flame.pawelmalak/order: 50
        spec:
          ingressClassName: nginx
          rules:
            - host: dev.mdapi.ch
              http:
                paths:
                  - backend:
                      service:
                        name: flame
                        port:
                          number: 5005
                    path: /
                    pathType: Prefix
          tls:
            - hosts:
                - dev.mdapi.ch
              secretName: flame-cert
      name: flame-ing.yml
    [...]
status:
  [...]
  display:
    readyClusters: 1/1
  [...]
```
This is the resulting Ingress resource in the target cluster:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: mdapi-dev-flame
    meta.helm.sh/release-namespace: flame
    objectset.rio.cattle.io/id: default-mdapi-dev-flame
  creationTimestamp: '2024-09-24T13:45:47Z'
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
    objectset.rio.cattle.io/hash: 69148ad9c9843b2fcc4414020489bfc2e222a250
  managedFields:
    - apiVersion: networking.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:meta.helm.sh/release-name: {}
            f:meta.helm.sh/release-namespace: {}
            f:objectset.rio.cattle.io/id: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/managed-by: {}
            f:objectset.rio.cattle.io/hash: {}
        f:spec:
          f:ingressClassName: {}
          f:rules: {}
          f:tls: {}
      manager: fleetagent
      operation: Update
      time: '2024-09-24T13:45:47Z'
  name: flame
  namespace: flame
  resourceVersion: '2098132'
  uid: 54fea8e0-9598-4800-890c-76fb97d0b7e7
spec:
  ingressClassName: nginx
  rules:
    - host: dev.mdapi.ch
      http:
        paths:
          - backend:
              service:
                name: flame
                port:
                  number: 5005
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - dev.mdapi.ch
      secretName: flame-cert
status:
  loadBalancer: {}
```
My own annotations in the Ingress's `metadata.annotations` field have been entirely replaced by `meta.helm.sh/release-name`, `meta.helm.sh/release-namespace` and `objectset.rio.cattle.io/id`.
### Expected Behavior
I would expect the `metadata.annotations` field of the resulting Ingress to look like this:
```yaml
metadata:
  annotations:
    cert-manager.io/issuer: letsencrypt-prod
    flame.pawelmalak/type: app
    flame.pawelmalak/name: MDAPI Home (dev)
    flame.pawelmalak/url: https://dev.mdapi.ch
    flame.pawelmalak/category: General
    flame.pawelmalak/icon: home
    flame.pawelmalak/order: 50
    meta.helm.sh/release-name: mdapi-dev-flame
    meta.helm.sh/release-namespace: flame
    objectset.rio.cattle.io/id: default-mdapi-dev-flame
```
I wouldn't have any issue with Fleet or Helm overwriting their own identifiers if they were previously set, but by removing every other annotation they are effectively truncating the provided manifest.
### Steps To Reproduce
- Install Rancher release v2.9.2 including Fleet 104.0.2+up0.10.2 and add a target cluster.
- Deploy a Bundle to the target cluster that includes an Ingress manifest with annotations (a minimal example is sketched below).
- Observe that the resulting Ingress in the target cluster is missing those annotations.
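For reference, here is a minimal sketch of the kind of manifest involved. The names, namespace, host and annotation keys are illustrative placeholders rather than my actual resources; any plain Ingress with custom annotations tracked by the GitRepo should do.

```yaml
# Illustrative placeholder, not my actual manifest.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: example
  annotations:
    example.io/category: General   # custom annotations like these end up missing in the target cluster
    example.io/order: 50           # unquoted number; see the note at the bottom of this issue
spec:
  ingressClassName: nginx
  rules:
    - host: example.invalid
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example
                port:
                  number: 8080
```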
### Environment
- Architecture: amd64
- Fleet Version: 0.10.2
- Rancher Cluster:
  - Provider: K3s
  - Options: 1 node (VM running on Harvester on 3 bare-metal nodes)
  - Kubernetes Version: v1.30.4+k3s1
- Target Cluster:
  - Provider: K3s
  - Options: 1 node (TrueNAS bare-metal node)
  - Kubernetes Version: v1.26.6-dirty
### Logs
No response
### Anything else?
No response
If you think I'm doing or understanding anything wrong, please ping me on Slack, where I've started a thread on this topic and we can discuss it.
Welcome to YAML hell: I cannot use a number as an annotation value, since it must be a string.
I fixed it by using `flame.pawelmalak/order: '50'` instead of `flame.pawelmalak/order: 50`.
Still, I had to deploy the manifest manually to discover the mistake; I would expect Fleet to report such errors. I'm going to update the issue title to reflect this nuance.
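For anyone hitting the same thing, this is the change that made the manifest valid; Kubernetes requires annotation values to be strings, so numeric values need quoting:

```yaml
metadata:
  annotations:
    # invalid: YAML parses the value as an integer, but annotation values must be strings
    # flame.pawelmalak/order: 50
    # valid: quoting yields a string value
    flame.pawelmalak/order: '50'
```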
- [ ] check metadata of all deployed resources for invalid k8s json schema