oam-kubernetes-runtime
[Feature] Proper defaulting labels for metadata of workload
Is your feature request related to a problem? Please describe.
As discussed in https://github.com/crossplane/oam-kubernetes-runtime/issues/136, the OAM runtime should automatically generate labels for a Workload (ref: the Kubernetes recommended labels), so that a Trait can select the workload by leveraging these default labels if its author doesn't want to define something like `app: nginx`.
I propose the following auto-generated labels:

```yaml
component.oam.dev/name: <component's metadata.name>
component.oam.dev/revision: <revision number of the component>
```
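For illustration, a workload stamped by the runtime might then look like this (a sketch; the revision value shown is hypothetical):

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
metadata:
  name: frontend
  labels:
    # set automatically by the OAM runtime
    component.oam.dev/name: frontend
    component.oam.dev/revision: "1"
spec:
  containers:
    - name: nginx
      image: nginx:1.14.2
```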
How to use these labels:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: frontend
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: ContainerizedWorkload
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
---
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: my-app-deployment
spec:
  components:
    - componentName: frontend
      traits:
        - apiVersion: v1
          kind: Service
          spec:
            selector:
              component.oam.dev/name: frontend
              component.oam.dev/revision: 1 # add this if this trait wants to select the workload at a specific revision
            ports:
              - protocol: TCP
                port: 80
                targetPort: 9376
```
Note that this proposal relies on: https://github.com/crossplane/oam-kubernetes-runtime/pull/175/files
> workload.oam.dev/name: <same as the workload's metadata.name>

Why do we need this when the name is already in the metadata?

> workload.oam.dev/revision: <revision number>

What revision is this? It does not look like the component revision number.

@ryanzhang-oss These labels all carry the component's info, not the workload's; I've updated the label keys and the example.
This is an important issue: with these labels, a trait can find the underlying pods easily.
It's helpful in at least the following two cases:
- An ingress/service/traffic trait can easily route its traffic to pods using only the information in the OAM AppConfig/Component.
- A log/metrics trait can easily find which pods to gather logs or metrics from via the OAM AppConfig.
What's more, any trait can rely on the Kubernetes label selector mechanism to find the underlying resources through the abstraction layer.
Besides these two labels:

```yaml
component.oam.dev/name: <component's metadata.name>
component.oam.dev/revision: <revision number of the component>
```

I propose we add one more to indicate which AppConfig instance it belongs to:

```yaml
appconfig.oam.dev/name: <appconfig's metadata.name>
```
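With all three labels combined, a workload's metadata might then carry something like the following (a sketch; the names and revision value are illustrative, taken from the example above):

```yaml
metadata:
  labels:
    appconfig.oam.dev/name: my-app-deployment
    component.oam.dev/name: frontend
    component.oam.dev/revision: "1"
```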
Not sure; how about `app.oam.dev/name: <appconfig's metadata.name>` instead?
I agree with the other points.
Note that this proposal relies on: https://github.com/crossplane/oam-kubernetes-runtime/pull/175/files
For ContainerizedWorkload, the workload's labels can be propagated to the generated deployments and pods, but for other, non-core workloads, such as a raw Deployment, the labels cannot be automatically generated for its pods (see https://github.com/crossplane/oam-kubernetes-runtime/issues/184 for details).
So we need to find a way to propagate all workloads' labels to the pod template.
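To illustrate the propagation point for a raw Deployment workload: the runtime would need to copy the generated labels into the pod template as well, not just into the top-level metadata, since a Service or other trait selects pods, not the Deployment object (a sketch; label values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    component.oam.dev/name: frontend # set by the OAM runtime
spec:
  selector:
    matchLabels:
      component.oam.dev/name: frontend
  template:
    metadata:
      labels:
        # must also be propagated here so traits can select the pods
        component.oam.dev/name: frontend
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
```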