Explicit error conditions in `FieldExport`
Is your feature request related to a problem?
When reconciling the `FieldExport` custom resource, the controller does not propagate errors up into the resource's conditions. As a result, if there is a typo in the `GroupKind`, or the destination ConfigMap is missing, the only indication that something went wrong is in the controller logs.

Describe the solution you'd like
Any errors surfaced in the controller logs should also be propagated to the resource conditions, in the same way as for other ACK resources, so that users can detect and resolve errors without requiring access to pod logs (a sketch follows below).

Describe alternatives you've considered
N/A
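
For illustration, here is a minimal sketch of what "propagating errors to conditions" could look like inside a reconciler. It uses the standard apimachinery condition helpers rather than ACK's actual runtime types; the `surfaceError` helper and the condition type shown are hypothetical:

```go
// Minimal sketch, assuming the standard apimachinery condition helpers.
// The surfaceError helper and the "ACK.Recoverable" condition type are
// illustrative; this is not ACK's actual implementation.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// surfaceError records a reconcile error on the resource's
// .status.conditions so that it is visible via `kubectl describe`,
// instead of living only in the controller logs.
func surfaceError(conditions *[]metav1.Condition, err error) {
	meta.SetStatusCondition(conditions, metav1.Condition{
		Type:    "ACK.Recoverable", // hypothetical condition type
		Status:  metav1.ConditionTrue,
		Reason:  "ReconcileError",
		Message: err.Error(),
	})
}

func main() {
	var conds []metav1.Condition
	// e.g. the user typo'd the source GroupKind in the FieldExport spec
	surfaceError(&conds, fmt.Errorf("source kind %q not found", "Bukcet"))
	for _, c := range conds {
		fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
```

With something along these lines, a typo'd `GroupKind` or a missing destination ConfigMap would show up in `.status.conditions` on the `FieldExport` itself, so users would not need access to the controller pod's logs to diagnose the failure.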
@RedbackThomson Could this also be the case for `AdoptedResource` too, since it has its own reconciler and is considered an Additive/Optional CR?
> Could this also be the case for `AdoptedResource` too?
Absolutely! I think we have some very minimal condition handling in `AdoptedResource`, but that definitely needs some more love and attention
@RedbackThomson Should we duplicate this issue for `AdoptedResource` or use this one as a catchall?
> @RedbackThomson Should we duplicate this issue for `AdoptedResource` or use this one as a catchall?
I will duplicate so we can track them separately
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle rotten
Rotten issues close after 60d of inactivity.
Reopen the issue with /reopen.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/close
@ack-bot: Closing this issue.
In response to this:
> Rotten issues close after 60d of inactivity. Reopen the issue with /reopen. Provide feedback via https://github.com/aws-controllers-k8s/community. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 180d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 60d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale