cloud-credential-operator
Set metadata.ownerReferences
Status Quo
Currently, the only way to connect a CredentialsRequest to its associated Secret(s) seems to be the annotation cloudcredential.openshift.io/credentials-request on the Secret(s).
I am not sure if this is documented/stable behaviour; I could not find any mention of it in the documentation.
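For illustration, a Secret managed by the operator carries something like the following (a minimal sketch; the Secret name, namespace and the namespace/name value of the annotation are assumptions on my part, not taken from any documentation):

apiVersion: v1
kind: Secret
metadata:
  name: my-component-cloud-credentials            # hypothetical
  namespace: my-component-namespace               # hypothetical
  annotations:
    cloudcredential.openshift.io/credentials-request: openshift-cloud-credential-operator/my-component
type: Opaque
data: {}                                          # provider-specific keys elided

Nothing in this metadata expresses ownership in the Kubernetes sense, which is the gap this issue is about.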
Proposal
Set metadata.ownerReferences on the created Secret(s), as per the standard Kubernetes owner/dependent mechanism.
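Concretely, the created Secret(s) would then carry an owner reference back to the CredentialsRequest, roughly like this (a sketch; names and UID are placeholders, the fields are just the standard Kubernetes owner-reference fields):

apiVersion: v1
kind: Secret
metadata:
  name: my-component-cloud-credentials            # hypothetical
  namespace: my-component-namespace               # hypothetical
  ownerReferences:
  - apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    name: my-component                            # hypothetical
    uid: 00000000-0000-0000-0000-000000000000     # placeholder
    controller: true
    blockOwnerDeletion: true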
Precedent
external-secrets/external-secrets sets owner references on the Secret(s) created from an ExternalSecret.
Use Case
In tools such as ArgoCD, it is useful to see all owned resources; without owner references, the Secret(s) created from a CredentialsRequest do not show up as its children in the resource tree.
ℹ️ I have created an issue for considering cloudcredential.openshift.io/credentials-request in ArgoCD: argoproj/argo-cd#15353
That said, my feeling is that adding ownerReferences here is the more sustainable approach overall.
Hello @reitzig,
I did some digging into the reasoning for the current design and found this comment.
https://github.com/openshift/cloud-credential-operator/blob/2c3298b1bb3aca9d7ebcde541b49dcef039e108a/pkg/operator/credentialsrequest/credentialsrequest_controller.go#L140-L143
From the k8s docs: "A valid owner reference consists of the object name and a UID within the same namespace as the dependent object."
The issue with using owner references is that OpenShift allows CredentialsRequest objects to live in a namespace other than that of the resultant Secret. During installation, component CredentialsRequests are created within the openshift-cloud-credential-operator namespace, while the Secrets they would own end up in various other namespaces.
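To make the problem concrete, an installed component's CredentialsRequest looks roughly like this (a sketch; the component name is made up), with spec.secretRef pointing at a different namespace than the CredentialsRequest itself:

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: my-component                              # hypothetical
  namespace: openshift-cloud-credential-operator
spec:
  secretRef:
    name: my-component-cloud-credentials          # hypothetical
    namespace: my-component-namespace             # differs from the CredentialsRequest's namespace
  providerSpec:                                   # provider-specific policy, elided
    kind: AWSProviderSpec

An owner reference from the Secret back to such a CredentialsRequest would have to cross namespaces, which Kubernetes does not support.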
One thing we could do is say that the Cloud Credential ClusterOperator owns all secrets managed by CredentialsRequests, credit to @wking for the idea.
For example, the Cluster Version Operator similarly owns the openshift-cloud-credential-operator namespace and uses its cluster-scoped config.openshift.io/v1 ClusterVersion object as the owner:
$ oc get -o json namespace openshift-cloud-credential-operator | jq '.metadata.ownerReferences[]'
{
  "apiVersion": "config.openshift.io/v1",
  "controller": true,
  "kind": "ClusterVersion",
  "name": "version",
  "uid": "7470fa17-d654-49af-b99b-f8a4c16a0e1f"
}
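Applied here, the idea would presumably be an owner reference like the following on every Secret managed by a CredentialsRequest (a sketch; the UID is a placeholder for the actual ClusterOperator UID):

ownerReferences:
- apiVersion: config.openshift.io/v1
  kind: ClusterOperator
  name: cloud-credential
  uid: 00000000-0000-0000-0000-000000000000       # placeholder
  controller: true

Because the owner would be cluster-scoped, this sidesteps the same-namespace restriction: namespaced dependents may reference cluster-scoped owners.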
Oh, I see, didn't read carefully enough. For reference, the docs are very clear:
Cross-namespace owner references are disallowed by design. [...] A namespaced owner must exist in the same namespace as the dependent. If it does not, the owner reference is treated as absent, and the dependent is subject to deletion once all owners are verified absent.
As for your proposal:
One thing we could do is say that the Cloud Credential ClusterOperator owns all secrets managed by CredentialsRequests
That would be better than nothing, I guess, but would do little to encode the relationship between a CredentialsRequest and its Secret.
Going beyond the wording of the OP (I think we're safely past that), I was wondering: why is the operator limited to that single namespace, openshift-cloud-credential-operator? If we were to create CredentialsRequests in the "target" namespace, owner references would be allowed! external-secrets doesn't have that limitation, so I presume this particular design choice is not due to security concerns?
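For concreteness, what I have in mind would look roughly like this (a sketch with hypothetical names); with the CredentialsRequest sitting next to its Secret, a same-namespace owner reference would be legal:

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: my-component                              # hypothetical
  namespace: my-component-namespace               # same namespace as the requested Secret
spec:
  secretRef:
    name: my-component-cloud-credentials          # hypothetical
    namespace: my-component-namespace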
I'm not yet sure why we've chosen to put all of the OpenShift component CredentialsRequest objects in the openshift-cloud-credential-operator namespace during installation. This might be something that we could change, but it may run into compatibility problems if anything relies on the existing design.
CredentialsRequest definitions for component operators are spread out among the repositories for those components; see cluster-ingress-operator's manifest, for example.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@abutcher, any news on this? Should I keep this open or let it die?
Howdy! Andrew has moved off of CCO at this point. @jstuever @dlom would y'all mind having a look at this and seeing if it's still relevant?