OpenSearch CRDs
Describe the bug
During the chart installation, CRDs are added to the cluster under two different API groups:
- opensearchservice.services.k8s.aws
- opensearch.services.k8s.aws
When I apply my Domain YAML under API version opensearchservice.services.k8s.aws/v1alpha1 (as described in the docs) and then try to get it with:
kubectl get Domain <my-domain-name>
I get an error:
Error from server (NotFound): domains.opensearch.services.k8s.aws "<my-domain-name>" not found
This is because the resource is not served under domains.opensearch.services.k8s.aws but under domains.opensearchservice.services.k8s.aws. This is very confusing, and it won't work when we try to read the status of our Domain resource.
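As a sanity check, a fully qualified get (a sketch using the group names above) does target the group the Domain is actually registered under:
kubectl get domains.opensearchservice.services.k8s.aws <my-domain-name>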
Why do we have both CRD API groups, opensearchservice.services.k8s.aws and opensearch.services.k8s.aws?
Expected outcome
As with any other AWS ACK operator, I think we should have only one CRD API group per Kind.
Environment
- Kubernetes version
- Using EKS - No
- AWS service targeted - opensearch
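For reference, a minimal sketch of the Domain manifest shape described above (the metadata name is a placeholder, and the spec fields are omitted since they depend on the ACK OpenSearch Service documentation):
apiVersion: opensearchservice.services.k8s.aws/v1alpha1
kind: Domain
metadata:
  name: <my-domain-name>
spec:
  # spec fields omitted; see the ACK docs for the Domain resource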
@oleg-yudovich Hi! Thank you for this issue! I've noticed the same thing while building out and debugging the Opensearchservice controller. It's super annoying behaviour, and I suspect it happens because the name of the service package in aws-sdk-go is different from the name of the model for the service in aws-sdk-go. In this case, the name of the service package is opensearchservice and the name of the model is opensearch. Yes, I know, it's unnecessarily inconsistent and annoying.
Somewhere in our generation of the CRD manifests, controller-gen crds must be getting confused as to which is the API group. I will try to track down specifically what is going on and fix it ASAP.
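In the meantime, a quick way to see both registrations side by side (plain kubectl, assuming the group names above) is:
kubectl api-resources --api-group=opensearchservice.services.k8s.aws
kubectl api-resources --api-group=opensearch.services.k8s.aws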
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle rotten
Issues go stale after 180d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 60d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
Stale issues rot after 60d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 60d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle rotten