AdoptedResource for S3 is not working
I did a lot of testing, but AdoptedResource for S3 does not appear to work.
The generations in the logs below keep incrementing because I was testing out different things, but there is never a log line saying that the bucket was actually adopted.
s3-controller:1.0.4:
2023-05-23T17:17:42.147Z INFO adoption.adopted-reconciler starting adoption reconciliation {"target_group": "s3.services.k8s.aws", "target_kind": "Bucket", "namespace": "myns", "name": "mybucket", "generation": 2}
2023-05-23T17:18:06.888Z INFO adoption.adopted-reconciler starting adoption reconciliation {"target_group": "s3.services.k8s.aws", "target_kind": "Bucket", "namespace": "myns", "name": "mybucket", "generation": 3}
2023-05-23T17:19:10.404Z INFO adoption.adopted-reconciler starting adoption reconciliation {"target_group": "s3.services.k8s.aws", "target_kind": "Bucket", "namespace": "myns", "name": "mybucket", "generation": 4}
2023-05-23T17:20:57.203Z INFO adoption.adopted-reconciler starting adoption reconciliation {"target_group": "s3.services.k8s.aws", "target_kind": "Bucket", "namespace": "myns", "name": "mybucket", "generation": 5}
Bucket status:
Status:
  Ack Resource Metadata:
    Owner Account ID:  xxxxx
    Region:            eu-west-1
  Conditions:
    Last Transition Time:  2023-05-23T17:09:19Z
    Message:               Resource already exists
    Reason:                This resource already exists but is not managed by ACK. To bring the resource under ACK management, you should explicitly adopt the resource by creating a services.k8s.aws/AdoptedResource
    Status:                True
    Type:                  ACK.Terminal
    Last Transition Time:  2023-05-23T17:09:19Z
    Message:               Resource not synced
    Reason:                resource is in terminal condition
    Status:                False
    Type:                  ACK.ResourceSynced
apiVersion: services.k8s.aws/v1alpha1
kind: AdoptedResource
metadata:
  name: mybucket-adopted
spec:
  kubernetes:
    group: s3.services.k8s.aws
    kind: Bucket
    # tested also with metadata, no change
  aws:
    # tested also with nameOrID, no change
    arn: arn:aws:s3:::mybucket
AdoptedResource describe output:
Name:         mybucket-adopted
Namespace:    myns
Labels:       kustomize.toolkit.fluxcd.io/name=apps
              kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations:  <none>
API Version:  services.k8s.aws/v1alpha1
Kind:         AdoptedResource
Metadata:
  Creation Timestamp:  2023-03-15T11:19:26Z
  Finalizers:
    finalizers.services.k8s.aws/AdoptedResource
  Generation:  5
  Managed Fields:
    API Version:  services.k8s.aws/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:kustomize.toolkit.fluxcd.io/name:
          f:kustomize.toolkit.fluxcd.io/namespace:
      f:spec:
        f:aws:
          f:arn:
        f:kubernetes:
          f:group:
          f:kind:
    Manager:      kustomize-controller
    Operation:    Apply
    Time:         2023-05-23T17:20:57Z
    API Version:  services.k8s.aws/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"finalizers.services.k8s.aws/AdoptedResource":
    Manager:      controller
    Operation:    Update
    Time:         2023-03-15T11:19:26Z
    API Version:  services.k8s.aws/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
    Manager:      controller
    Operation:    Update
    Subresource:  status
    Time:         2023-03-15T11:19:26Z
  Resource Version:  351351889
  UID:               0fd2bb2d-f9b2-463c-ac60-f99e9ac189ec
Spec:
  Aws:
    Arn:  arn:aws:s3:::mybucket
  Kubernetes:
    Group:  s3.services.k8s.aws
    Kind:   Bucket
Status:
  Conditions:
    Status:  True
    Type:    ACK.Adopted
Events:  <none>
@RedbackThomson do you think this is a case of the adoption reconciler discovering an existing Bucket CR (that is in Terminal state) and setting the AdoptedResource to an ACK.Adopted state erroneously?
This line of the s3-controller indicates that the adoption spec field to use for adopting a bucket is nameOrID, with the bucket name as the value.
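For reference, a minimal sketch of the manifest with nameOrID instead of arn, assuming the bucket is literally named mybucket and the CR lives in the myns namespace shown in the describe output above:

apiVersion: services.k8s.aws/v1alpha1
kind: AdoptedResource
metadata:
  name: mybucket-adopted
  namespace: myns
spec:
  aws:
    # the bucket name, since the S3 controller keys buckets by name rather than ARN
    nameOrID: mybucket
  kubernetes:
    group: s3.services.k8s.aws
    kind: Bucket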
It seems our generated code in resource.go isn't returning an error when the wrong spec identifier field is used (arn instead of nameOrID). Any behaviour past that point is just erroneous, and it probably fails strangely with S3 in particular because ListBuckets takes no filter argument, so our S3 code ends up stuck in an infinite loop trying to describe the bucket by its name, which is an empty string here.
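To illustrate the kind of guard being described, here is a hypothetical, self-contained Go sketch; it is not the actual generated code in resource.go, and the type and function names are invented for illustration only:

package main

import (
	"errors"
	"fmt"
)

// AWSIdentifiers mirrors the shape of spec.aws on an AdoptedResource.
// The field names here are illustrative, not the real generated types.
type AWSIdentifiers struct {
	ARN      string
	NameOrID string
}

// bucketNameFromIdentifiers returns the bucket name to adopt, or an error
// when only an ARN was supplied. Returning an error here surfaces the
// misconfiguration instead of letting the controller work with an empty name.
func bucketNameFromIdentifiers(ids AWSIdentifiers) (string, error) {
	if ids.NameOrID == "" {
		return "", errors.New("adopting an S3 Bucket requires spec.aws.nameOrID (the bucket name); spec.aws.arn alone is not enough")
	}
	return ids.NameOrID, nil
}

func main() {
	// Reproduces the failure mode from this issue: only arn is set.
	if _, err := bucketNameFromIdentifiers(AWSIdentifiers{ARN: "arn:aws:s3:::mybucket"}); err != nil {
		fmt.Println("adoption would fail fast with:", err)
	}
}

Failing fast like this would make the misconfiguration visible instead of leaving the controller retrying with an empty bucket name as described above.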
Issues go stale after 180d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 60d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
this is still an issue, is there something I can do to help debug?
Issues go stale after 180d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 60d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
Stale issues rot after 60d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 60d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle rotten
Rotten issues close after 60d of inactivity.
Reopen the issue with /reopen.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/close
@ack-bot: Closing this issue.
In response to this:
Rotten issues close after 60d of inactivity. Reopen the issue with
/reopen. Provide feedback via https://github.com/aws-controllers-k8s/community. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.