Deployment spec properties are inadvertently removed when serializing
Summary
Various valid deployment spec properties are wiped out when I load a spec into a V1DeploymentSpec object, make a small change, and then re-serialize it to a dict.
Details
I am extracting a V1DeploymentSpec from a ClusterServiceVersion manifest, making a small mutation to it, then re-serializing it and adding it back into the ClusterServiceVersion. This round trip introduces multiple mutations that were not intended. See the following sequence of diffs (none of which were intended) for reference:
@@ -356,287 +355,126 @@
- selector:
- matchLabels:
- name: my-operator
+ selector: {}
...
spec:
- affinity:
- nodeAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- nodeSelectorTerms:
- - matchExpressions:
- - key: kubernetes.io/arch
- operator: In
- values:
- - s390x
- - amd64
+ affinity: {}
...
- name: WATCH_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.annotations['olm.targetNamespaces']
- name: POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
...
- imagePullPolicy: Always
- livenessProbe:
- failureThreshold: 4
- httpGet:
- path: /healthz
- port: 8081
- initialDelaySeconds: 15
- periodSeconds: 15
- successThreshold: 1
- timeoutSeconds: 3
- readinessProbe:
- failureThreshold: 3
- httpGet:
- path: /healthz
- port: 8081
- initialDelaySeconds: 15
- periodSeconds: 15
- successThreshold: 1
- timeoutSeconds: 3
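For what it's worth, the symptom (populated nested objects collapsing to `{}`) is consistent with the model being rebuilt from only part of the source data. The following is a minimal self-contained sketch of that failure mode using toy classes that mimic the generated-model pattern; these classes are hypothetical illustrations, not the real kubernetes client API:

```python
# Toy sketch (hypothetical classes, NOT the real kubernetes client) of how
# an OpenAPI-generated model can lose fields on a load -> mutate -> dump
# round trip when the model is rebuilt from only part of the source data.

class ToyDeploymentSpec:
    """Mimics the generated-model pattern: snake_case attributes that
    default to None, and a to_dict() that emits every attribute."""

    def __init__(self, replicas=None, selector=None):
        self.replicas = replicas
        self.selector = selector

    def to_dict(self):
        # Unset nested objects come back as {}, mirroring the
        # "selector: {}" and "affinity: {}" lines in the diff above.
        return {
            "replicas": self.replicas,
            "selector": self.selector if self.selector is not None else {},
        }


manifest_spec = {
    "replicas": 1,
    "selector": {"matchLabels": {"name": "my-operator"}},
}

# Rebuilding the spec while only carrying over some fields: every field
# the constructor does not receive is reset to its default.
spec = ToyDeploymentSpec(replicas=manifest_spec["replicas"])
spec.replicas = 2  # the one intended mutation
print(spec.to_dict())  # {'replicas': 2, 'selector': {}}
```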
/help
@roycaihw: This request has been marked as needing help from a contributor.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I would like to take up this issue.
@whatsacomputertho What I understand from the description is that you are loading the spec from the ClusterServiceVersion manifest and making small mutations, but other unexpected mutations appear alongside them. Could it be that you are loading the mutated data into another V1DeploymentSpec object, which then reinitializes the remaining fields to their default values, producing output such as `selector: {}`?
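If that is the cause, one way to avoid it is to mutate the parsed dict representation of the manifest in place rather than rebuilding a typed model, so every untouched key survives the round trip. A minimal sketch with plain dicts (YAML parsing omitted; the structure shown is a stripped-down stand-in for the real manifest):

```python
# Hedged workaround sketch: mutate the manifest's dict representation in
# place instead of reconstructing a V1DeploymentSpec, so properties that
# are not touched are preserved verbatim.
import copy

manifest = {
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"name": "my-operator"}},
    }
}

patched = copy.deepcopy(manifest)  # keep the original intact
patched["spec"]["replicas"] = 2    # the only intended change

# The selector (and everything else) is carried through unchanged.
print(patched["spec"]["selector"])  # {'matchLabels': {'name': 'my-operator'}}
```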