cluster-api-provider-aws
Refactor the reconciler creation
/kind feature /priority backlog /area testing /triage accepted
Describe the solution you'd like

As part of reviewing #3073, there was a suggestion that we change how we create instances of our reconcilers and how we inject mocked dependencies for testing purposes, so that we can remove utility functions like this:
```go
func (r *AWSClusterReconciler) getEC2Service(scope scope.EC2Scope) services.EC2MachineInterface {
	if r.ec2ServiceFactory != nil {
		return r.ec2ServiceFactory(scope)
	}
	return ec2.NewService(scope)
}
```
The suggestion was that we should handle this as part of creating an instance of the reconcilers; the code inside them shouldn't be aware of / have any logic around the creation of service clients. Something along the lines of this code (not tested / doesn't compile):
```go
type NewClusterReconcilerInput struct {
	Manager          ctrl.Manager
	WatchFilterValue string
	Endpoints        []scope.ServiceEndpoint
}

type NewClusterReconcilerServices struct {
	EC2ServiceFactory     func(scope.EC2Scope) services.EC2MachineInterface
	NetworkServiceFactory func(scope.ClusterScope) services.NetworkInterface
	ElbServiceFactory     func(scope.ELBScope) services.ELBInterface
	SecurityGroupFactory  func(scope.ClusterScope) services.SecurityGroupInterface
}

func NewClusterReconciler(input NewClusterReconcilerInput) reconcile.Reconciler {
	return &awsClusterReconciler{
		Client:                input.Manager.GetClient(),
		Recorder:              input.Manager.GetEventRecorderFor("awscluster-controller"),
		Endpoints:             input.Endpoints,
		WatchFilterValue:      input.WatchFilterValue,
		ec2ServiceFactory:     ec2.NewService,
		elbServiceFactory:     elb.NewService,
		networkServiceFactory: network.NewService,
		securityGroupFactory:  securitygroup.NewService,
	}
}

func NewClusterReconcilerWithServices(input NewClusterReconcilerInput, svcs NewClusterReconcilerServices) reconcile.Reconciler {
	return &awsClusterReconciler{
		Client:                input.Manager.GetClient(),
		Recorder:              input.Manager.GetEventRecorderFor("awscluster-controller"),
		Endpoints:             input.Endpoints,
		WatchFilterValue:      input.WatchFilterValue,
		ec2ServiceFactory:     svcs.EC2ServiceFactory,
		elbServiceFactory:     svcs.ElbServiceFactory,
		networkServiceFactory: svcs.NetworkServiceFactory,
		securityGroupFactory:  svcs.SecurityGroupFactory,
	}
}

// awsClusterReconciler reconciles an AWSCluster object.
type awsClusterReconciler struct {
	client.Client
	Recorder         record.EventRecorder
	Endpoints        []scope.ServiceEndpoint
	WatchFilterValue string

	ec2ServiceFactory     func(scope.EC2Scope) services.EC2MachineInterface
	networkServiceFactory func(scope.ClusterScope) services.NetworkInterface
	elbServiceFactory     func(scope.ELBScope) services.ELBInterface
	securityGroupFactory  func(scope.ClusterScope) services.SecurityGroupInterface
}
```
Or using functional options:
```go
type ReconcilerOption func(*AWSReconciler)

type EC2ServiceFactory func(scope.EC2Scope) services.EC2MachineInterface

func WithEC2ServiceFactory(fn EC2ServiceFactory) ReconcilerOption {
	return func(r *AWSReconciler) {
		r.EC2ServiceFactory = fn
	}
}

func NewClusterReconciler(input NewClusterReconcilerInput, opts ...ReconcilerOption) reconcile.Reconciler {
	clusterReconciler := &awsClusterReconciler{
		Client:                input.Manager.GetClient(),
		Recorder:              input.Manager.GetEventRecorderFor("awscluster-controller"),
		Endpoints:             input.Endpoints,
		WatchFilterValue:      input.WatchFilterValue,
		EC2ServiceFactory:     ec2.NewService,
		ELBServiceFactory:     elb.NewService,
		NetworkServiceFactory: network.NewService,
		SecurityGroupFactory:  securitygroup.NewService,
	}
	for _, opt := range opts {
		opt(clusterReconciler)
	}
	return clusterReconciler
}
```
Anything else you would like to add:

Environment:
- Cluster-api-provider-aws version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
/assign
A direct benefit this change will have: https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/3159#discussion_r803460544
@sedefsavas then shall we wait for this change to merge before merging/starting integration tests for AWSCluster and AWSMachine controllers?
If this will help with the integration tests then yes, it may be worth waiting. But if not, I can always refactor them, no problem.
No need to wait, we can refactor IMO.
Yes, what I meant was that we first need this refactoring and then rebase the changes in the integration tests, and we need those tests as soon as possible. If @richardcase isn't working on this as of now, I am happy to do it.
@Ankitasw - I have already started on this issue and the refactor. Which reminds me, I forgot to add:
/lifecycle active
I didn't get much further due to time constraints so:
/unassign
@Ankitasw - is this something you'd still be interested in working on?
@richardcase I can do it, if you have not started
@richardcase there are two options suggested in the issue; which one should we move forward with? I am ok with either approach.
I prefer the functional options route, and that's what I had made a start on. But go with whichever you feel is best.
I think functional options are better if we want to add more services later; that way the implementation stays flexible and won't require huge changes every time we add a new service factory.
/assign
@richardcase I have questions regarding this implementation:
- As mentioned in the example, `NewClusterReconciler` should be used during controller creation in `main.go`. If that is the case, what opts will we be passing from there? If we pass opts like `WithEC2ServiceFactory`, we would need scope objects there, but we won't have a scope created by then.
- We cannot pass `ec2.NewService` as mentioned below, because of return type incompatibility (`Cannot use 'ec2.NewService' (type func(clusterScope scope.EC2Scope) *Service) as the type func(scope.EC2Scope) services.EC2Interface`):
```go
func NewClusterReconciler(input NewClusterReconcilerInput, opts ...ReconcilerOption) reconcile.Reconciler {
	clusterReconciler := &awsClusterReconciler{
		Client:                input.Manager.GetClient(),
		Recorder:              input.Manager.GetEventRecorderFor("awscluster-controller"),
		Endpoints:             input.Endpoints,
		WatchFilterValue:      input.WatchFilterValue,
		EC2ServiceFactory:     ec2.NewService,
		ELBServiceFactory:     elb.NewService,
		NetworkServiceFactory: network.NewService,
		SecurityGroupFactory:  securitygroup.NewService,
	}
	for _, opt := range opts {
		opt(clusterReconciler)
	}
	return clusterReconciler
}
```
Update: I got some idea after thinking out loud, maybe I will try it once and raise a WIP PR to get feedback.
Great :) The sample code I gave was just a brain dump... I never actually tested it. So I'm keen to see the idea you had :smile:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
/remove-lifecycle frozen
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.