Requeue parent objects upon changes to child objects in integration tests
We are making a custom controller with multiple CRDs. Controller A reconciles objects of type A and creates objects of type B (a different in-house CRD). When running controller A, anytime a change is made to the child object (B), the parent object is requeued and will run through the reconcile loop again. However, this does not happen when running the integration tests for controller A.
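For context, a minimal sketch of the wiring that produces this behaviour when the controller runs in a cluster. The names here are placeholders (hypothetical `WidgetA`/`WidgetB` types and an `examplev1` API package standing in for the in-house CRDs), and the `Reconcile` signature shown is the one used by controller-runtime v0.7+; the original issue does not include this code.

```go
package controllers

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	examplev1 "example.com/project/api/v1" // hypothetical API group for the two in-house CRDs
)

// WidgetAReconciler stands in for the issue's "controller A"; WidgetA and
// WidgetB stand in for object types A and B.
type WidgetAReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

// Owns() is what makes controller-runtime enqueue the owning WidgetA whenever
// a WidgetB it controls changes (spec or status).
func (r *WidgetAReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&examplev1.WidgetA{}).  // primary resource
		Owns(&examplev1.WidgetB{}). // child resource: events on B map back to the owning A
		Complete(r)
}

func (r *WidgetAReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	parent := &examplev1.WidgetA{}
	if err := r.Get(ctx, req.NamespacedName, parent); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	child := &examplev1.WidgetB{
		ObjectMeta: metav1.ObjectMeta{
			Name:      parent.Name + "-child", // hypothetical naming scheme
			Namespace: parent.Namespace,
		},
	}
	// The controller owner reference is what the Owns() watch uses to decide
	// which WidgetA to requeue when this WidgetB changes.
	if err := ctrl.SetControllerReference(parent, child, r.Scheme); err != nil {
		return ctrl.Result{}, err
	}
	// Create/patch of the child and already-exists handling are omitted in this sketch.
	if err := r.Create(ctx, child); err != nil {
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}
```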
In our tests, we create an instance of object A, and controller A then creates an instance of object B. We then manually update the status of B to be marked as ready, which should cause controller A to reconcile object A again (since it was a change to the child object). This second reconcile loop never happens because kubebuilder doesn't automatically requeue the parent object in the integration tests like it does when running the controller in the cluster. Is there a setting that we need to add when creating our CRD Reconciler in suite_test.go, or is there something else that we can enable to get requeues working in our integration tests?
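Since the question is specifically about suite_test.go: in the default kubebuilder scaffold, the requeue-on-child-change behaviour only exists in envtest if the reconciler is registered with a manager and that manager is actually started against the test API server; constructing the Reconciler and calling Reconcile by hand creates no watches, so nothing is ever requeued. Below is a hedged sketch of the relevant wiring, not the issue author's code: `WidgetAReconciler` and `examplev1` are placeholders, the ginkgo/gomega imports follow the scaffold, and exact signatures (e.g. `Manager.Start` taking a context vs. a stop channel) vary by controller-runtime version.

```go
package controllers

import (
	"context"
	"path/filepath"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
	"k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/envtest"

	examplev1 "example.com/project/api/v1" // hypothetical API package
)

var (
	k8sClient client.Client
	testEnv   *envtest.Environment
	ctx       context.Context
	cancel    context.CancelFunc
)

var _ = BeforeSuite(func() {
	ctx, cancel = context.WithCancel(context.Background())

	testEnv = &envtest.Environment{
		CRDDirectoryPaths: []string{filepath.Join("..", "config", "crd", "bases")},
	}
	cfg, err := testEnv.Start()
	Expect(err).NotTo(HaveOccurred())

	Expect(examplev1.AddToScheme(scheme.Scheme)).To(Succeed())

	// The crucial part: build a manager and register the reconciler with it,
	// so the Owns() watch on the child type exists. If the test only uses a
	// client and never starts a manager, no events are delivered and the
	// parent is never requeued.
	k8sManager, err := ctrl.NewManager(cfg, ctrl.Options{Scheme: scheme.Scheme})
	Expect(err).NotTo(HaveOccurred())

	err = (&WidgetAReconciler{
		Client: k8sManager.GetClient(),
		Scheme: k8sManager.GetScheme(),
	}).SetupWithManager(k8sManager)
	Expect(err).NotTo(HaveOccurred())

	go func() {
		defer GinkgoRecover()
		// On controller-runtime < v0.7, Start takes a stop channel instead of a context.
		Expect(k8sManager.Start(ctx)).To(Succeed())
	}()

	k8sClient = k8sManager.GetClient()
})

var _ = AfterSuite(func() {
	cancel()
	Expect(testEnv.Stop()).To(Succeed())
})
```

With this in place, a test that creates a WidgetA, then patches the child WidgetB's status, should see the manager's watch enqueue the parent for a second reconcile, the same as in a real cluster.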
Hi @bleech1,
Could you please provide further information about this one?
a) Are you using a fake client or envtest to implement your tests?
b) What are you trying to do? Could you provide an example with the code and the full details so that anyone is able to check and reproduce your scenario? What are the steps?
c) What are you checking? What do you expect to see? What are the circumstances?
d) Are we able to reproduce it with the latest/master branch versions, or are you using old ones?
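The distinction behind question (a) matters for this issue, so here is a hedged illustration (placeholder `examplev1` package; `fake.NewClientBuilder` exists in newer controller-runtime releases): a fake client is only an in-memory object store with no manager, no watches, and no reconcile loop behind it, so a status update to B can never trigger a requeue of A. With envtest, requeues work only when the manager is started, as in the suite_test.go sketch above.

```go
package controllers_test

import (
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"

	examplev1 "example.com/project/api/v1" // hypothetical API package
)

func newFakeClient() client.Client {
	s := runtime.NewScheme()
	_ = examplev1.AddToScheme(s)
	// Objects written through this client are only stored in memory; nothing
	// watches them, so no reconcile is ever enqueued for their owners.
	return fake.NewClientBuilder().WithScheme(s).Build()
}
```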
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Closing this one since there has been no further interaction and the requested information was not provided. However, please feel free to open a new issue if you need.