
Deployments from different CRD types with same name get bundled incorrectly.

rmkraus opened this issue 2 years ago · 1 comment

I have an instance where two deployments are being created by different CRDs in my operator. Both of the owning object instances have the same name, but they are of different kinds.

Here is the obfuscated metadata for each deployment:

Workspace Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2022-06-07T18:56:10Z"
  generation: 2
  labels:
    app.kubernetes.io/component: workspace
    app.kubernetes.io/name: app-name
    app.kubernetes.io/part-of: Workspaces
    app.openshift.io/runtime: python
  name: app-name-workspace
  namespace: devel
  ownerReferences:
  - apiVersion: example.com/v1alpha1
    kind: Workspace
    name: app-name
    uid: daec01fa-1b32-4a1e-9576-273a56d5bb2f
  resourceVersion: "18382342"
  uid: 8d5dd70e-8eaa-42af-8e44-e52a7b8855b3

Server Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-06-07T18:47:20Z"
  generation: 1
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/name: app-name
    app.kubernetes.io/part-of: Servers
  name: app-name-server
  namespace: devel
  ownerReferences:
  - apiVersion: example.com/v1alpha1
    kind: Server
    name: app-name
    uid: 3fc6209d-878e-4b51-9cdc-464228f69e57
  resourceVersion: "18373908"
  uid: 4d62857b-d4a2-4d5a-96ff-76bbc4d666ef

The resulting view in the Developer Topology bundles both of these deployments into the app-name/Servers operator grouping, and that operator grouping is placed into the Workspaces application grouping.

I would expect the server deployment to be in an app-name/Servers operator grouping that is in a Servers application grouping. The workspace deployment should be in an app-name/Workspaces operator grouping that is in a Workspaces application grouping.
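
For reference, here is a sketch of the two owner objects, reconstructed from the ownerReferences above (the namespace is assumed to match the Deployments, and the spec of the real custom resources is omitted):

# Sketch of the owner objects, reconstructed from the ownerReferences above.
# Only apiVersion, kind, name, and uid come from the metadata shown; the
# namespace is assumed to match the Deployments, and spec fields are omitted.
apiVersion: example.com/v1alpha1
kind: Workspace
metadata:
  name: app-name
  namespace: devel
  uid: daec01fa-1b32-4a1e-9576-273a56d5bb2f
---
apiVersion: example.com/v1alpha1
kind: Server
metadata:
  name: app-name
  namespace: devel
  uid: 3fc6209d-878e-4b51-9cdc-464228f69e57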

rmkraus commented Jun 07 '22 19:06

/cc @jerolimov

spadgett commented Jul 25 '22 18:07

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now, please do so with /close.

/lifecycle stale

openshift-bot commented Oct 24 '22 01:10

/remove-lifecycle stale

christoph-jerolimov commented Oct 24 '22 15:10

Hello @rmkraus, sorry for the delay, I missed this issue.

For the groups, we normally use only the app.kubernetes.io/part-of label. You're saying the ownerReference changed the grouping here as well? I tried the following, but could not reproduce your issue:

  1. Instead of a CRD, I created a ConfigMap and a Secret:
apiVersion: v1
kind: ConfigMap
metadata:
  name: a-configmap
data: {}
---
apiVersion: v1
kind: Secret
metadata:
  name: a-secret
data: {}
  2. I created two Deployments based on your input and with a valid spec. After creating the ConfigMap and Secret, I changed the ownerReference uid for both resources!
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: workspace
    app.kubernetes.io/name: app-name
    app.kubernetes.io/part-of: Workspaces
    app.openshift.io/runtime: python
  name: app-name-workspace
  ownerReferences:
  - apiVersion: v1
    kind: ConfigMap
    name: a-configmap
    uid: 8cac7ab4-acf2-4093-8f92-bba0723b645e
spec:
  selector:
    matchLabels:
      app: workspace
  replicas: 1
  template:
    metadata:
      labels:
        app: workspace
    spec:
      containers:
        - name: container
          image: image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest
          ports:
            - containerPort: 8080
              protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/name: app-name
    app.kubernetes.io/part-of: Servers
  name: app-name-server
  ownerReferences:
  - apiVersion: v1
    kind: Secret
    name: a-secret
    uid: c87a8466-897e-4ec6-bffd-bf516a85a9da
spec:
  selector:
    matchLabels:
      app: server
  replicas: 1
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
        - name: container
          image: image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest
          ports:
            - containerPort: 8080
              protocol: TCP

The result looks like this on 4.10:

[screenshot: Topology view on 4.10]

And on the latest master (upcoming 4.12) like this:

[screenshot: Topology view on the latest master (upcoming 4.12)]

Can you share some more information about exactly which version you used, and maybe an example that reproduces this issue in a fresh namespace without your CRDs?
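
If the owner kind turns out to matter, a minimal stand-in CRD could replace the ConfigMap/Secret in such a reproducer. Here is a hypothetical sketch; only the group example.com, the version v1alpha1, and the Workspace kind come from the issue, while the plural name and the permissive schema are assumptions:

# Hypothetical stand-in CRD and custom resource for a reproducer.
# The plural name, listKind, and open schema are assumptions, not taken
# from the reporter's operator.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: workspaces.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Workspace
    listKind: WorkspaceList
    plural: workspaces
    singular: workspace
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
---
apiVersion: example.com/v1alpha1
kind: Workspace
metadata:
  name: app-name

An analogous CRD and custom resource for the Server kind would complete the pair, and the Deployments' ownerReferences would then point at these objects instead of the ConfigMap and Secret.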

Thanks a lot for logging this bug and for your patience, but I don't know how to reproduce it at the moment. I will close this for now, but please feel free to reopen it once you have added more details.

/close

christoph-jerolimov commented Oct 24 '22 15:10

@jerolimov: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

openshift-ci[bot] commented Oct 24 '22 15:10