Documentation on Errors
Is your feature request related to a problem?
Many errors in ACK are communicated through status conditions. We need to document this behavior for ACK users. For example:
I have an ElastiCache manifest file like the following:
apiVersion: elasticache.services.k8s.aws/v1alpha1
kind: ReplicationGroup
metadata:
  name: rg
spec:
  engine: redis
  replicationGroupID: rg
  replicationGroupDescription: test replication group
  cacheNodeType: cache.t3.micro
  numNodeGroups: 1
  atRestEncryptionEnabled: true
  transitEncryptionEnabled: true
  replicasPerNodeGroup: 6
As of now, the allowed values for replicasPerNodeGroup are 0 to 5. However, when someone applies this manifest with kubectl, the request is accepted:
kubectl apply -f ~/Documents/rg.yaml
replicationgroup.elasticache.services.k8s.aws/rg created
Currently, to see the errors, one needs to run:
kubectl describe replicationgroup/rg
The response will include the error conditions:
Conditions:
  Message: The number of replicas per node group must be within 0 and 5.
  Status:  True
  Type:    ACK.Terminal
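The terminal condition can also be extracted directly using kubectl's JSONPath support. This is a hypothetical one-liner based on the condition type shown above, not an officially documented workflow:

kubectl get replicationgroup rg -o jsonpath='{.status.conditions[?(@.type=="ACK.Terminal")].message}'

This prints the Message field of the ACK.Terminal condition, if one is present.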
Describe the solution you'd like
Document that users should rely on terminal conditions rather than on the kubectl/client response.
Describe alternatives you've considered
N/A
It would also be nice to have some clarity on how the Terminal condition relates to other conditions.
There are some cases with our controller where a resource will have both the Terminal and ResourceSynced conditions set to True (because an invalid modification was attempted and the Terminal condition was set without updating the ResourceSynced condition).
If this is a bug, then we should make it clear in the documentation that these aren't supposed to be set at the same time; if this is normal, then we should make it clear that the Terminal condition takes precedence over ResourceSynced. Either way, it's not obvious (in my opinion) what a user should expect, so it would be good to clarify these points.
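For illustration, a client that gives the Terminal condition precedence might look like the sketch below. This is hypothetical client-side logic, assuming the condition types ACK.Terminal and ACK.ResourceSynced as shown earlier in this thread; it is not part of any documented ACK workflow:

# Read both condition statuses from the resource.
terminal=$(kubectl get replicationgroup rg \
  -o jsonpath='{.status.conditions[?(@.type=="ACK.Terminal")].status}')
synced=$(kubectl get replicationgroup rg \
  -o jsonpath='{.status.conditions[?(@.type=="ACK.ResourceSynced")].status}')

# Check Terminal first: a terminal resource will never converge,
# regardless of what ResourceSynced says.
if [ "$terminal" = "True" ]; then
  echo "Resource is in a terminal state; fix the spec and reapply."
elif [ "$synced" = "True" ]; then
  echo "Resource is synced with the AWS resource."
else
  echo "Resource is still reconciling."
fi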
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
We can add a page to the documentation providing context on errors and status conditions, and detailing what constitutes a Terminal status for a given resource.
/remove-lifecycle frozen
The error conditions have been documented on the ACK website here: https://aws-controllers-k8s.github.io/community/docs/user-docs/resource-crud/#condition-types
Feel free to reopen the issue if you have any further comments! /close
@rushmash91: Closing this issue.