cluster-api-provider-aws
Unit tests should not need to call reconcile()
In the unit tests that use testEnv, instance creation should trigger reconciliation automatically, so we shouldn't need to call reconcile() manually.
For this, we need to add the reconciler by calling SetupWithManager(), similar to: https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/76ea0c9445caa2ecbdcf97a24dcaf0433a6f8139/exp/instancestate/awsinstancestate_controller_test.go#L87 (a sketch of that wiring is included below).
/kind cleanup /help /good-first-issue
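A minimal sketch of the intended suite_test.go wiring (illustrative only: scheme registration for the CAPA API types is elided, the real AWSMachineReconciler takes more dependencies, and its SetupWithManager signature may also include a context and controller.Options depending on the release):

```go
// suite_test.go (sketch): start envtest and attach the reconciler to the
// manager so that creating objects triggers reconciliation automatically.
package controllers

import (
	"os"
	"testing"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

var testEnv *envtest.Environment

func TestMain(m *testing.M) {
	// Spin up a real API server and etcd for the suite.
	testEnv = &envtest.Environment{
		CRDDirectoryPaths: []string{"../config/crd/bases"},
	}
	cfg, err := testEnv.Start()
	if err != nil {
		panic(err)
	}

	mgr, err := ctrl.NewManager(cfg, ctrl.Options{})
	if err != nil {
		panic(err)
	}

	// The key step this issue asks for: register the reconciler with the
	// manager instead of calling reconcile() by hand in each test.
	if err := (&AWSMachineReconciler{Client: mgr.GetClient()}).SetupWithManager(mgr); err != nil {
		panic(err)
	}

	ctx := ctrl.SetupSignalHandler()
	go func() {
		if err := mgr.Start(ctx); err != nil {
			panic(err)
		}
	}()

	code := m.Run()
	_ = testEnv.Stop()
	os.Exit(code)
}
```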
@sedefsavas I would like to contribute. Can you please guide me further?
/assign @Srikrishnabh
@Srikrishnabh Right now, we do not attach controllers to the envtest manager in most of the suite_test.go files where we set up the testing environment. As a result, creating an object does not trigger its reconciler, and we invoke reconciliation manually, as here: https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/5721067c87ec597c8b7816d9e79851ca6bad5129/controllers/awsmachine_controller_test.go#L187
If we start the controller by calling SetupWithManager(), then just creating an AWSMachine will trigger the AWSMachine controller's reconcile(). There are similar examples in the core cluster-api repo as well: https://github.com/kubernetes-sigs/cluster-api/blob/8d1396821ca9bdb46b4dc3b63344f72f934bf443/controllers/machine_controller_test.go#L152
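In other words, a test could then look roughly like this (a Gomega-style sketch: k8sClient and ctx are assumed to come from the suite setup, and the finalizer check is only a stand-in for whatever side effect of reconciliation the test actually asserts on):

```go
import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"

	infrav1 "sigs.k8s.io/cluster-api-provider-aws/api/v1beta1" // adjust to the release in use
)

// k8sClient and ctx are assumed package-level values from the suite setup.
func TestAWSMachineReconciledOnCreate(t *testing.T) {
	g := NewWithT(t)

	// No manual reconcile() call: just create the object...
	machine := &infrav1.AWSMachine{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "test-",
			Namespace:    "default",
		},
		Spec: infrav1.AWSMachineSpec{InstanceType: "t3.micro"},
	}
	g.Expect(k8sClient.Create(ctx, machine)).To(Succeed())

	// ...and wait for the running controller to act on it. The finalizer
	// check is a hypothetical observable side effect of reconciliation.
	g.Eventually(func() []string {
		if err := k8sClient.Get(ctx, client.ObjectKeyFromObject(machine), machine); err != nil {
			return nil
		}
		return machine.Finalizers
	}, 10*time.Second).ShouldNot(BeEmpty())
}
```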
Feel free to ping me on slack if you have questions.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now, please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
/assign @mevech
@sedefsavas I would like to work on this issue, if that is okay?
Welcome to the community @mevech! Feel free to reach out on the Slack channel if you have questions.
@sedefsavas Trying to understand the scope of this issue.
Out of all the _test.go files under the controllers/ dir, it looks like awscluster_controller_test.go is the only one that requires adjustment after adding its reconciler to the testEnv manager. The other tests in the controllers dir use mocked interfaces / a fake k8s client rather than the testEnv client.
OR
Is this issue about setting up all reconcilers with the testEnv manager and swapping all fake k8s clients in the tests for the testEnv/envtest client? That may be a bigger change (for example, in machine_controller_test the resource names would have to differ per test, or else the teardown would have to wait until deletion completes before moving to the next test case; a sketch of that kind of wait follows below).
Once the above scope is clear, I would like to take a stab at this.
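For that teardown concern, the wait could be a small Gomega helper along these lines (a sketch only: waitTillDeleted is a hypothetical name, and k8sClient and ctx are assumed to come from the envtest suite setup):

```go
import (
	"time"

	. "github.com/onsi/gomega"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitTillDeleted (hypothetical helper) deletes obj and blocks until the
// API server confirms it is gone, so the next test case can safely reuse
// the same resource name. k8sClient and ctx are assumed package-level
// values from the envtest suite setup.
func waitTillDeleted(g *WithT, obj client.Object) {
	g.Expect(k8sClient.Delete(ctx, obj)).To(Succeed())
	g.Eventually(func() bool {
		err := k8sClient.Get(ctx, client.ObjectKeyFromObject(obj), obj)
		return apierrors.IsNotFound(err)
	}, 10*time.Second).Should(BeTrue(), "object should be fully deleted before the next test")
}
```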
/priority backlog /triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Hey @mevech, are you still working on this issue?
It looks like no-one is working on this issue.
/assign
@Ankitasw IIRC there were some difficulties triggering the reconcile method by just creating the object in the recent tests you added; instead, the reconcileNormal method was being called directly. Is this still a good first issue?
Good point @sedefsavas, I will drop good-first-issue for now and see if it's still possible to do this. /unassign @KushalBeniwal /remove-good-first-issue
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.