
Refactor the reconciler creation

richardcase opened this issue 3 years ago · 21 comments

/kind feature
/priority backlog
/area testing
/triage accepted

Describe the solution you'd like

As part of reviewing #3073, there was a suggestion that we change how we create instances of our reconcilers and how we inject mocked dependencies for testing, so that we can remove utility functions like this:

// getEC2Service returns the service built by the injected factory when one
// is set (as in tests), falling back to constructing the real EC2 service.
func (r *AWSClusterReconciler) getEC2Service(scope scope.EC2Scope) services.EC2MachineInterface {
	if r.ec2ServiceFactory != nil {
		return r.ec2ServiceFactory(scope)
	}
	return ec2.NewService(scope)
}
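
For context, tests currently hook into this by setting the factory field on the reconciler directly; a minimal sketch, where the mock name is hypothetical:

reconciler := &AWSClusterReconciler{
	// With ec2ServiceFactory set, getEC2Service returns the mock instead
	// of constructing a real EC2 service.
	ec2ServiceFactory: func(scope.EC2Scope) services.EC2MachineInterface {
		return mockEC2Service // hypothetical mock implementing EC2MachineInterface
	},
}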

The suggestion was that we should handle this as part of creating an instance of the reconcilers, and that the code inside them shouldn't be aware of, or contain any logic around, the creation of service clients. Something along the lines of this code (not tested / doesn't compile):

type NewClusterReconcilerInput struct {
	Manager          ctrl.Manager
	WatchFilterValue string
	Endpoints        []scope.ServiceEndpoint
}

type NewClusterReconcilerServices struct {
	EC2ServiceFactory     func(scope.EC2Scope) services.EC2MachineInterface
	NetworkServiceFactory func(scope.ClusterScope) services.NetworkInterface
	ElbServiceFactory     func(scope.ELBScope) services.ELBInterface
	SecurityGroupFactory  func(scope.ClusterScope) services.SecurityGroupInterface
}

// NewClusterReconciler wires up the default (real) AWS service factories.
func NewClusterReconciler(input NewClusterReconcilerInput) reconcile.Reconciler {
	return &awsClusterReconciler{
		Client:           input.Manager.GetClient(),
		Recorder:         input.Manager.GetEventRecorderFor("awscluster-controller"),
		Endpoints:        input.Endpoints,
		WatchFilterValue: input.WatchFilterValue,

		ec2ServiceFactory:     ec2.NewService,
		elbServiceFactory:     elb.NewService,
		networkServiceFactory: network.NewService,
		securityGroupFactory:  securitygroup.NewService,
	}
}

// NewClusterReconcilerWithServices lets callers (typically tests) supply
// their own service factories instead of the defaults.
func NewClusterReconcilerWithServices(input NewClusterReconcilerInput, svcs NewClusterReconcilerServices) reconcile.Reconciler {
	return &awsClusterReconciler{
		Client:           input.Manager.GetClient(),
		Recorder:         input.Manager.GetEventRecorderFor("awscluster-controller"),
		Endpoints:        input.Endpoints,
		WatchFilterValue: input.WatchFilterValue,

		ec2ServiceFactory:     svcs.EC2ServiceFactory,
		elbServiceFactory:     svcs.ElbServiceFactory,
		networkServiceFactory: svcs.NetworkServiceFactory,
		securityGroupFactory:  svcs.SecurityGroupFactory,
	}
}

// awsClusterReconciler reconciles an AWSCluster object.
type awsClusterReconciler struct {
	client.Client
	Recorder         record.EventRecorder
	Endpoints        []scope.ServiceEndpoint
	WatchFilterValue string

	ec2ServiceFactory     func(scope.EC2Scope) services.EC2MachineInterface
	networkServiceFactory func(scope.ClusterScope) services.NetworkInterface
	elbServiceFactory     func(scope.ELBScope) services.ELBInterface
	securityGroupFactory  func(scope.ClusterScope) services.SecurityGroupInterface
}
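
With this shape, production code would call NewClusterReconciler while tests would call NewClusterReconcilerWithServices and supply mocks; a minimal usage sketch under the same untested assumptions (mock names hypothetical):

// Test wiring: every factory comes from the test, so the reconciler body
// needs no nil-checks or fallback construction like getEC2Service.
reconciler := NewClusterReconcilerWithServices(
	NewClusterReconcilerInput{Manager: mgr},
	NewClusterReconcilerServices{
		EC2ServiceFactory: func(scope.EC2Scope) services.EC2MachineInterface {
			return mockEC2Service
		},
		// ...remaining factories supplied the same way
	},
)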

Or using functional options:

type ReconcilerOption func(*awsClusterReconciler)

type EC2ServiceFactory func(scope.EC2Scope) services.EC2MachineInterface

// WithEC2ServiceFactory overrides the default EC2 service factory,
// e.g. to inject a mock in tests.
func WithEC2ServiceFactory(fn EC2ServiceFactory) ReconcilerOption {
	return func(r *awsClusterReconciler) {
		r.ec2ServiceFactory = fn
	}
}

func NewClusterReconciler(input NewClusterReconcilerInput, opts ...ReconcilerOption) reconcile.Reconciler {
	clusterReconciler := &awsClusterReconciler{
		Client:           input.Manager.GetClient(),
		Recorder:         input.Manager.GetEventRecorderFor("awscluster-controller"),
		Endpoints:        input.Endpoints,
		WatchFilterValue: input.WatchFilterValue,

		ec2ServiceFactory:     ec2.NewService,
		elbServiceFactory:     elb.NewService,
		networkServiceFactory: network.NewService,
		securityGroupFactory:  securitygroup.NewService,
	}

	// Apply caller-supplied overrides on top of the defaults.
	for _, opt := range opts {
		opt(clusterReconciler)
	}

	return clusterReconciler
}
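
Under the functional-options variant, defaults live in the constructor and a caller overrides only what it needs; again a sketch, not tested (mock name hypothetical):

// Production wiring keeps all the default service factories.
reconciler := NewClusterReconciler(input)

// Test wiring overrides a single factory via an option.
testReconciler := NewClusterReconciler(input,
	WithEC2ServiceFactory(func(scope.EC2Scope) services.EC2MachineInterface {
		return mockEC2Service
	}),
)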

Anything else you would like to add:

Environment:

  • Cluster-api-provider-aws version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

richardcase · Feb 09, 2022

/assign

richardcase · Feb 09, 2022

A direct benefit this change will have: https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/3159#discussion_r803460544

sedefsavas · Feb 10, 2022

@sedefsavas then shall we wait for this change to merge before merging/starting integration tests for AWSCluster and AWSMachine controllers?

Ankitasw · Feb 11, 2022

@sedefsavas then shall we wait for this change to merge before merging/starting integration tests for AWSCluster and AWSMachine controllers?

If this will help with the integration tests, then yes, it may be worth waiting. But if not, I can always refactor them, no problem.

richardcase · Feb 11, 2022

No need to wait, we can refactor IMO.

sedefsavas · Feb 11, 2022

No need to wait, we can refactor IMO.

Yes, what I meant was that we first need this refactoring, and then we can rebase the changes in the integration tests; we need these tests as soon as possible. If @richardcase isn't actively working on it right now, I am happy to do it.

Ankitasw · Feb 11, 2022

@Ankitasw - I have already started on this issue and the refactor. Which reminds me, I forgot to add:

/lifecycle active

richardcase · Feb 11, 2022

I didn't get much further due to time constraints, so:

/unassign

richardcase · Mar 23, 2022

@Ankitasw - is this something you'd still be interested in working on?

richardcase · Mar 23, 2022

@richardcase I can do it if you have not started.

Ankitasw · Mar 23, 2022

@richardcase there are two options suggested in the issue; which one should we move forward with? I am OK with either approach.

Ankitasw · Mar 23, 2022

I prefer the functional options route, and that's what I had made a start on. But whichever you feel is best.

richardcase · Mar 23, 2022

I think functional options are better if we want to add more services later; that way the implementation stays flexible and won't require huge changes every time we add a new service factory.

Ankitasw · Mar 23, 2022
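
For illustration of that flexibility, extending the functional-options approach with a new service would only mean adding one option, leaving existing call sites untouched; a sketch where the SSM names are purely hypothetical:

// Hypothetical new service: one new option, and no changes needed at
// NewClusterReconciler call sites that don't care about it.
type SSMServiceFactory func(scope.ClusterScope) services.SSMInterface

func WithSSMServiceFactory(fn SSMServiceFactory) ReconcilerOption {
	return func(r *awsClusterReconciler) {
		r.ssmServiceFactory = fn
	}
}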

/assign

Ankitasw · Mar 23, 2022

@richardcase I have questions regarding this implementation:

  • As mentioned in the example, NewClusterReconciler should be used during controller creation in main.go. If that is the case, what opts will we be passing from there? If we pass opts like WithEC2ServiceFactory, we would need scope objects there, but we won't have a scope created by then.
  • We cannot pass ec2.NewService as shown below, because the compiler will not allow it due to return-type incompatibility (cannot use ec2.NewService (type func(clusterScope scope.EC2Scope) *Service) as the type func(scope.EC2Scope) services.EC2Interface).
func NewClusterReconciler(input NewClusterReconcilerInput, opts ...ReconcilerOption) reconcile.Reconciler {
	clusterReconciler := &awsClusterReconciler{
		Client:           input.Manager.GetClient(),
		Recorder:         input.Manager.GetEventRecorderFor("awscluster-controller"),
		Endpoints:        input.Endpoints,
		WatchFilterValue: input.WatchFilterValue,

		ec2ServiceFactory:     ec2.NewService,
		elbServiceFactory:     elb.NewService,
		networkServiceFactory: network.NewService,
		securityGroupFactory:  securitygroup.NewService,
	}
	for _, opt := range opts {
		opt(clusterReconciler)
	}
	return clusterReconciler
}

Ankitasw · Mar 24, 2022
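
On the second point above: Go function types are invariant in their return type, so a constructor returning the concrete *ec2.Service cannot be assigned directly to a field whose function type returns an interface. A thin adapter closure closes the gap; a sketch, assuming *ec2.Service satisfies the interface:

	// In the constructor, instead of assigning ec2.NewService directly,
	// wrap it so the closure's return type is the interface the field expects.
	ec2ServiceFactory: func(s scope.EC2Scope) services.EC2MachineInterface {
		return ec2.NewService(s) // assumes *ec2.Service implements the interface
	},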

Update: I got an idea after thinking out loud; I'll try it once and raise a WIP PR to get feedback.

Ankitasw · Mar 24, 2022

Update: I got an idea after thinking out loud; I'll try it once and raise a WIP PR to get feedback.

Great :) The sample code I gave was just a brain dump... I never actually tested it. So I'm keen to see the idea you had :smile:

richardcase · Mar 24, 2022

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jun 22, 2022

/lifecycle frozen

richardcase · Jun 22, 2022

/remove-lifecycle frozen

richardcase · Jul 12, 2022

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Oct 10, 2022

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Nov 09, 2022

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Dec 09, 2022

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Dec 09, 2022