operator-sdk
Pass uid as ansible_operator_meta.uid
Feature Request
Describe the problem you need a feature to resolve.
There are times during an Ansible Operator reconcile when a k8s resource is created outside of the Operator process. To help clean up such resources by making them owned by the custom resource being reconciled, we need the uid of that custom resource. We can always fetch the custom resource during the reconcile and read its uid, but the SDK has likely already fetched the custom resource and could pass the uid along as ansible_operator_meta.uid. That would save an extra fetch of the custom resource.
Describe the solution you'd like.
Described above as part of the problem statement.
Passing the uid as ansible_operator_meta.uid is an additive change and should not affect any existing Ansible Operator adopters.
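For context, a minimal sketch of the extra fetch described above, assuming the kubernetes.core collection is available in the operator image (the `MyApp` CR in `example.com/v1alpha1` and the PVC name `data-myapp-0` are hypothetical):

```yaml
# Current workaround: fetch the CR being reconciled just to read its uid.
- name: Look up the custom resource being reconciled
  kubernetes.core.k8s_info:
    api_version: example.com/v1alpha1     # hypothetical CR apiVersion
    kind: MyApp                           # hypothetical CR kind
    name: "{{ ansible_operator_meta.name }}"
    namespace: "{{ ansible_operator_meta.namespace }}"
  register: cr

# Adopt a resource created outside the operator by patching in an
# ownerReference that points back at the CR.
- name: Own a PVC created outside the operator
  kubernetes.core.k8s:
    state: patched
    definition:
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: data-myapp-0                # hypothetical PVC name
        namespace: "{{ ansible_operator_meta.namespace }}"
        ownerReferences:
          - apiVersion: example.com/v1alpha1
            kind: MyApp
            name: "{{ ansible_operator_meta.name }}"
            uid: "{{ cr.resources[0].metadata.uid }}"
```

With the proposed field, the k8s_info task could be dropped and the last line would become `uid: "{{ ansible_operator_meta.uid }}"`.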
/language ansible
> when a k8s resource is created outside of Operator process

I don't really understand what you mean by this. If a resource is created outside the operator process, it should be handled by whatever created it, not the operator's controller. Do you have an example of what you mean here?
The only thing like that I can think of is when you create something like a Deployment, which has dependent resources; those should be handled by the controller for the Deployment resource.
There are two examples from the use cases that I work on:
- When a StatefulSet is created, the PVCs created from the StatefulSet's `volumeClaimTemplates` are not owned by anything. The ownerReferences for the PVCs need to be specified in `StatefulSet.volumeClaimTemplates` (see the sketch after this list).
- In the CloudPak for Data platform from IBM that I work on, we have a use case where certain resources, like ConfigMap objects, get created in a different container owned by the platform (triggered via a REST API call from the Operator). Those externally created ConfigMap objects need to be adopted after creation (not really an atomic operation, though).
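To illustrate the first example, a sketch of what the StatefulSet task could look like if the proposed ansible_operator_meta.uid existed (the StatefulSet name, image, and CR kind below are hypothetical):

```yaml
- name: Create a StatefulSet whose PVCs are owned by the custom resource
  kubernetes.core.k8s:
    definition:
      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: myapp                                        # hypothetical name
        namespace: "{{ ansible_operator_meta.namespace }}"
      spec:
        serviceName: myapp
        selector:
          matchLabels:
            app: myapp
        template:
          metadata:
            labels:
              app: myapp
          spec:
            containers:
              - name: main
                image: registry.example.com/myapp:latest   # hypothetical image
        volumeClaimTemplates:
          - metadata:
              name: data
              # Owning the generated PVCs means stamping the ownerReference
              # into the template, which requires the CR's uid up front.
              ownerReferences:
                - apiVersion: example.com/v1alpha1         # hypothetical CR GVK
                  kind: MyApp
                  name: "{{ ansible_operator_meta.name }}"
                  uid: "{{ ansible_operator_meta.uid }}"   # proposed field
            spec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi
```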
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
> Rotten issues close after 30d of inactivity.
> Reopen the issue by commenting /reopen.
> Mark the issue as fresh by commenting /remove-lifecycle rotten.
> Exclude this issue from closing again by commenting /lifecycle frozen.
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.