Refactor Helm controller

Open aiyengar2 opened this issue 3 years ago • 3 comments

Makes a number of changes to refactor k3s-io/helm-controller for use in rancher/helm-project-operator.

aiyengar2 avatar Apr 07 '22 16:04 aiyengar2

This is a HUGE PR. Does it have any chance of being merged? What about splitting it into many smaller PRs to help the maintainers review the proposed changes?

gbonnefille avatar Apr 12 '22 06:04 gbonnefille

Hi @gbonnefille, this PR is still a work in progress so I’m not sure yet. But splitting it into multiple PRs sounds like a good idea!

aiyengar2 avatar Apr 12 '22 15:04 aiyengar2

@brandond sorry for the delay! Rebased the current version of helm-controller onto the latest master, can you take a look when you get a chance? Responded to the comments too.

aiyengar2 avatar Aug 12 '22 23:08 aiyengar2

@brandond in case it's helpful: pkg/controllers/chart/chart.go is just a renamed version of pkg/helm/controller.go with a bunch of the code rewritten. The GitHub diff makes it look like a completely new file, which it is not. It's the old file with some bug fixes and predominantly changes to import naming conventions (plus some constants and certain function calls, like creating an event broadcaster, moved to other files).
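For example, the import-alias convention change, which accounts for much of the diff below, boils down to excerpts like this (an illustrative fragment, not the file's full import block; the comments note the old aliases):

```go
import (
	corev1 "k8s.io/api/core/v1"                   // previously aliased as core (and v1)
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" // previously aliased as meta
)
```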

The diff looks like this:

--- -	2022-08-30 11:34:20.000000000 -0700
+++ pkg/controllers/chart/chart.go	2022-08-30 11:30:02.000000000 -0700
@@ -1,4 +1,4 @@
-package helm
+package chart
 
 import (
 	"context"
@@ -10,54 +10,36 @@
 	"strings"
 	"time"
 
-	helmv1 "github.com/k3s-io/helm-controller/pkg/apis/helm.cattle.io/v1"
+	v1 "github.com/k3s-io/helm-controller/pkg/apis/helm.cattle.io/v1"
 	helmcontroller "github.com/k3s-io/helm-controller/pkg/generated/controllers/helm.cattle.io/v1"
+	"github.com/k3s-io/helm-controller/pkg/remove"
 	"github.com/rancher/wrangler/pkg/apply"
 	batchcontroller "github.com/rancher/wrangler/pkg/generated/controllers/batch/v1"
 	corecontroller "github.com/rancher/wrangler/pkg/generated/controllers/core/v1"
 	rbaccontroller "github.com/rancher/wrangler/pkg/generated/controllers/rbac/v1"
-	"github.com/rancher/wrangler/pkg/objectset"
+	"github.com/rancher/wrangler/pkg/generic"
 	"github.com/rancher/wrangler/pkg/relatedresource"
-	"github.com/rancher/wrangler/pkg/schemes"
-	"github.com/sirupsen/logrus"
 	batch "k8s.io/api/batch/v1"
-	core "k8s.io/api/core/v1"
-	v1 "k8s.io/api/core/v1"
+	corev1 "k8s.io/api/core/v1"
 	rbac "k8s.io/api/rbac/v1"
 	"k8s.io/apimachinery/pkg/api/errors"
-	meta "k8s.io/apimachinery/pkg/apis/meta/v1"
+	apierrors "k8s.io/apimachinery/pkg/api/errors"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime"
 	"k8s.io/apimachinery/pkg/types"
 	"k8s.io/apimachinery/pkg/util/intstr"
 	"k8s.io/client-go/kubernetes"
-	typedv1 "k8s.io/client-go/kubernetes/typed/core/v1"
 	"k8s.io/client-go/tools/record"
 	"k8s.io/utils/pointer"
 )
 
-var (
-	commaRE              = regexp.MustCompile(`\\*,`)
-	deletePolicy         = meta.DeletePropagationForeground
-	DefaultJobImage      = "rancher/klipper-helm:v0.7.3-build20220613"
-	DefaultFailurePolicy = FailurePolicyReinstall
-)
-
-type Controller struct {
-	namespace      string
-	helmController helmcontroller.HelmChartController
-	confController helmcontroller.HelmChartConfigController
-	jobsCache      batchcontroller.JobCache
-	apply          apply.Apply
-	recorder       record.EventRecorder
-}
-
 const (
 	Label         = "helmcharts.helm.cattle.io/chart"
 	Annotation    = "helmcharts.helm.cattle.io/configHash"
 	Unmanaged     = "helmcharts.helm.cattle.io/unmanaged"
+	ManagedBy     = "helmcharts.cattle.io/managed-by"
 	CRDName       = "helmcharts.helm.cattle.io"
 	ConfigCRDName = "helmchartconfigs.helm.cattle.io"
-	Name          = "helm-controller"
 
 	TaintExternalCloudProvider = "node.cloudprovider.kubernetes.io/uninitialized"
 	LabelNodeRolePrefix        = "node-role.kubernetes.io/"
@@ -68,169 +50,296 @@
 	FailurePolicyAbort     = "abort"
 )
 
+var (
+	commaRE              = regexp.MustCompile(`\\*,`)
+	deletePolicy         = metav1.DeletePropagationForeground
+	DefaultJobImage      = "rancher/klipper-helm:v0.7.3-build20220613"
+	DefaultFailurePolicy = FailurePolicyReinstall
+)
+
+type Controller struct {
+	systemNamespace string
+	managedBy       string
+	helms           helmcontroller.HelmChartController
+	helmCache       helmcontroller.HelmChartCache
+	confs           helmcontroller.HelmChartConfigController
+	confCache       helmcontroller.HelmChartConfigCache
+	jobs            batchcontroller.JobController
+	jobCache        batchcontroller.JobCache
+	apply           apply.Apply
+	recorder        record.EventRecorder
+}
+
 func Register(ctx context.Context,
+	systemNamespace, managedBy string,
 	k8s kubernetes.Interface,
 	apply apply.Apply,
+	recorder record.EventRecorder,
 	helms helmcontroller.HelmChartController,
+	helmCache helmcontroller.HelmChartCache,
 	confs helmcontroller.HelmChartConfigController,
+	confCache helmcontroller.HelmChartConfigCache,
 	jobs batchcontroller.JobController,
+	jobCache batchcontroller.JobCache,
 	crbs rbaccontroller.ClusterRoleBindingController,
 	sas corecontroller.ServiceAccountController,
 	cm corecontroller.ConfigMapController) {
-	apply = apply.WithSetID(Name).
+
+	c := &Controller{
+		systemNamespace: systemNamespace,
+		managedBy:       managedBy,
+		helms:           helms,
+		helmCache:       helmCache,
+		confs:           confs,
+		confCache:       confCache,
+		jobs:            jobs,
+		jobCache:        jobCache,
+		recorder:        recorder,
+	}
+
+	c.apply = apply.
 		WithCacheTypes(helms, confs, jobs, crbs, sas, cm).
-		WithStrictCaching().WithPatcher(batch.SchemeGroupVersion.WithKind("Job"), func(namespace, name string, pt types.PatchType, data []byte) (runtime.Object, error) {
-		err := jobs.Delete(namespace, name, &meta.DeleteOptions{PropagationPolicy: &deletePolicy})
-		if err == nil {
-			return nil, fmt.Errorf("replace job")
-		}
-		return nil, err
+		WithStrictCaching().
+		WithPatcher(batch.SchemeGroupVersion.WithKind("Job"), c.jobPatcher)
+
+	relatedresource.Watch(ctx, "resolve-helm-chart-from-config", c.resolveHelmChartFromConfig, helms, confs)
+
+	// Why do we need to add the managedBy string to the generatingHandlerName?
+	//
+	// By default, generating handlers use the name of the controller as the set ID for the wrangler.apply operation
+	// Therefore, if multiple iterations of the helm-controller are using the same set ID, they will try to overwrite each other's
+	// resources since each controller will detect the other's set as resources that need to be cleaned up to apply the new set
+	//
+	// To resolve this, we simply prefix the provided managedBy string to the generatingHandler controller's name only to ensure that the
+	// set ID specified will only target this particular controller
+	generatingHandlerName := fmt.Sprintf("%s-chart-registration", managedBy)
+	helmcontroller.RegisterHelmChartGeneratingHandler(ctx, helms, c.apply, "", generatingHandlerName, c.OnChange, &generic.GeneratingHandlerOptions{
+		AllowClusterScoped: true,
 	})
 
-	relatedresource.Watch(ctx, "helm-pod-watch",
-		func(namespace, name string, obj runtime.Object) ([]relatedresource.Key, error) {
-			if job, ok := obj.(*batch.Job); ok {
-				name := job.Labels[Label]
-				if name != "" {
-					return []relatedresource.Key{
-						{
-							Name:      name,
-							Namespace: namespace,
-						},
-					}, nil
-				}
+	remove.RegisterScopedOnRemoveHandler(ctx, helms, "on-helm-chart-remove",
+		func(key string, obj runtime.Object) (bool, error) {
+			if obj == nil {
+				return false, nil
 			}
-			return nil, nil
+			helmChart, ok := obj.(*v1.HelmChart)
+			if !ok {
+				return false, nil
+			}
+			return c.shouldManage(helmChart)
 		},
-		helms,
-		confs,
-		jobs)
+		helmcontroller.FromHelmChartHandlerToHandler(c.OnRemove),
+	)
 
-	eventBroadcaster := record.NewBroadcaster()
-	eventBroadcaster.StartLogging(logrus.Infof)
-	eventBroadcaster.StartRecordingToSink(&typedv1.EventSinkImpl{Interface: k8s.CoreV1().Events(meta.NamespaceSystem)})
-	eventSource := v1.EventSource{Component: Name}
-	if nodeName := os.Getenv("NODE_NAME"); nodeName != "" {
-		eventSource.Host = nodeName
-	}
+	relatedresource.Watch(ctx, "resolve-helm-chart-owned-resources",
+		relatedresource.OwnerResolver(true, v1.SchemeGroupVersion.String(), "HelmChart"),
+		helms,
+		jobs, crbs, sas, cm,
+	)
+}
 
-	controller := &Controller{
-		helmController: helms,
-		confController: confs,
-		jobsCache:      jobs.Cache(),
-		apply:          apply,
-		recorder:       eventBroadcaster.NewRecorder(schemes.All, eventSource),
+func (c *Controller) jobPatcher(namespace, name string, pt types.PatchType, data []byte) (runtime.Object, error) {
+	err := c.jobs.Delete(namespace, name, &metav1.DeleteOptions{PropagationPolicy: &deletePolicy})
+	if err == nil || apierrors.IsNotFound(err) {
+		return nil, fmt.Errorf("create or replace job")
 	}
-
-	helms.OnChange(ctx, Name, controller.OnHelmChange)
-	helms.OnRemove(ctx, Name, controller.OnHelmRemove)
-	confs.OnChange(ctx, Name, controller.OnConfChange)
-	confs.OnRemove(ctx, Name, controller.OnConfChange)
+	return nil, err
 }
 
-func (c *Controller) OnHelmChange(key string, chart *helmv1.HelmChart) (*helmv1.HelmChart, error) {
-	if chart == nil {
+func (c *Controller) resolveHelmChartFromConfig(namespace, name string, obj runtime.Object) ([]relatedresource.Key, error) {
+	if len(c.systemNamespace) > 0 && namespace != c.systemNamespace {
+		// do nothing if it's not in the namespace this controller was registered with
 		return nil, nil
 	}
-	if chart.Spec.Chart == "" && chart.Spec.ChartContent == "" {
-		return chart, nil
-	}
-	if _, ok := chart.Annotations[Unmanaged]; ok {
-		return chart, nil
-	}
-
-	failurePolicy := DefaultFailurePolicy
-	objs := objectset.NewObjectSet()
-	job, valuesConfigMap, contentConfigMap := job(chart)
-	objs.Add(serviceAccount(chart))
-	objs.Add(roleBinding(chart))
-
-	if chart.Spec.FailurePolicy != "" {
-		failurePolicy = chart.Spec.FailurePolicy
-	}
-
-	if config, err := c.confController.Cache().Get(chart.Namespace, chart.Name); err != nil {
-		if !errors.IsNotFound(err) {
-			return chart, err
+	if conf, ok := obj.(*v1.HelmChartConfig); ok {
+		chart, err := c.helmCache.Get(conf.Namespace, conf.Name)
+		if err != nil {
+			if !errors.IsNotFound(err) {
+				return nil, err
+			}
 		}
-	} else if config != nil {
-		valuesConfigMapAddConfig(valuesConfigMap, config)
-		if config.Spec.FailurePolicy != "" {
-			failurePolicy = config.Spec.FailurePolicy
+		if chart == nil {
+			return nil, nil
 		}
+		return []relatedresource.Key{
+			{
+				Name:      conf.Name,
+				Namespace: conf.Namespace,
+			},
+		}, nil
 	}
+	return nil, nil
+}
 
-	setFailurePolicy(job, failurePolicy)
-	hashConfigMaps(job, contentConfigMap, valuesConfigMap)
-
-	objs.Add(contentConfigMap)
-	objs.Add(valuesConfigMap)
-	objs.Add(job)
+func (c *Controller) OnChange(chart *v1.HelmChart, chartStatus v1.HelmChartStatus) ([]runtime.Object, v1.HelmChartStatus, error) {
+	if shouldManage, err := c.shouldManage(chart); err != nil {
+		return nil, chartStatus, err
+	} else if !shouldManage {
+		return nil, chartStatus, nil
+	}
+	if chart.DeletionTimestamp != nil {
+		// this should only be called if the chart is being installed or upgraded
+		return nil, chartStatus, nil
+	}
 
-	c.recorder.Eventf(chart, core.EventTypeNormal, "ApplyJob", "Applying HelmChart using Job %s/%s", job.Namespace, job.Name)
-	if err := c.apply.WithOwner(chart).Apply(objs); err != nil {
-		return chart, err
+	job, objs, err := c.getJobAndRelatedResources(chart)
+	if err != nil {
+		return nil, chartStatus, err
 	}
+	// update status
+	chartStatus.JobName = job.Name
 
-	chartCopy := chart.DeepCopy()
-	chartCopy.Status.JobName = job.Name
-	return c.helmController.Update(chartCopy)
+	// emit an event to indicate that this Helm chart is being applied
+	c.recorder.Eventf(chart, corev1.EventTypeNormal, "ApplyJob", "Applying HelmChart using Job %s/%s", job.Namespace, job.Name)
+
+	return append(objs, job), chartStatus, nil
 }
 
-func (c *Controller) OnHelmRemove(key string, chart *helmv1.HelmChart) (*helmv1.HelmChart, error) {
+func (c *Controller) OnRemove(key string, chart *v1.HelmChart) (*v1.HelmChart, error) {
 	if chart == nil {
 		return nil, nil
 	}
-	if chart.Spec.Chart == "" {
-		return chart, nil
+
+	expectedJob, objs, err := c.getJobAndRelatedResources(chart)
+	if err != nil {
+		return nil, err
 	}
-	if _, ok := chart.Annotations[Unmanaged]; ok {
-		return chart, nil
+
+	// note: on the logic of running an apply here...
+	// if the uninstall job does not exist, it will create it
+	// if the job already exists and it is uninstalling, nothing will change since there's no need to patch
+	// if the job already exists but is tied to an install or upgrade, there will be a need to patch so
+	// the apply will execute the jobPatcher, which will delete the install/upgrade job and recreate a uninstall job
+	err = generic.ConfigureApplyForObject(c.apply, chart, &generic.GeneratingHandlerOptions{
+		AllowClusterScoped: true,
+	}).
+		WithOwner(chart).
+		WithSetID("helm-chart-registration").
+		ApplyObjects(append(objs, expectedJob)...)
+	if err != nil {
+		return nil, err
 	}
 
-	job, _, _ := job(chart)
-	job, err := c.jobsCache.Get(chart.Namespace, job.Name)
+	// sleep for 3 seconds to give the job time to perform the helm install
+	// before emitting any errors
+	time.Sleep(3 * time.Second)
 
+	// once we have run the above logic, we can now check if the job is complete
+	job, err := c.jobCache.Get(chart.Namespace, expectedJob.Name)
 	if errors.IsNotFound(err) {
-		_, err := c.OnHelmChange(key, chart)
-		if err != nil {
-			return chart, err
-		}
+		// the above apply should have created it, something is wrong.
+		// if you are here, there must be a bug in the code.
+		return chart, fmt.Errorf("could not perform uninstall: expected job %s/%s to exist after apply, but not found", chart.Namespace, expectedJob.Name)
 	} else if err != nil {
 		return chart, err
 	}
 
+	// the first time we call this, the job will definitely not be complete... however,
+	// throwing an error here will re-enqueue this controller, which will process the apply again
+	// and get the job object from the cache to check again
 	if job.Status.Succeeded <= 0 {
-		return chart, fmt.Errorf("waiting for delete of helm chart for %s by %s", key, job.Name)
+		// temporarily recreate the chart, but keep the deletion timestamp
+		chartCopy := chart.DeepCopy()
+		chartCopy.Status.JobName = job.Name
+		newChart, err := c.helms.Update(chartCopy)
+		if err != nil {
+			return chart, fmt.Errorf("unable to update status of helm chart to add uninstall job name %s", chartCopy.Status.JobName)
+		}
+		return newChart, fmt.Errorf("waiting for delete of helm chart for %s by %s", key, job.Name)
 	}
 
-	chartCopy := chart.DeepCopy()
-	chartCopy.Status.JobName = job.Name
-	newChart, err := c.helmController.Update(chartCopy)
+	// uninstall job has successfully finished!
+	c.recorder.Eventf(chart, corev1.EventTypeNormal, "RemoveJob", "Uninstalled HelmChart using Job %s/%s, removing resources", job.Namespace, job.Name)
 
+	// note: an empty apply removes all resources owned by this chart
+	err = generic.ConfigureApplyForObject(c.apply, chart, &generic.GeneratingHandlerOptions{
+		AllowClusterScoped: true,
+	}).
+		WithOwner(chart).
+		WithSetID("helm-chart-registration").
+		ApplyObjects()
 	if err != nil {
-		return newChart, err
+		return nil, fmt.Errorf("unable to remove resources tied to HelmChart %s/%s: %s", chart.Namespace, chart.Name, err)
 	}
 
-	return newChart, c.apply.WithOwner(newChart).Apply(objectset.NewObjectSet())
+	return chart, nil
 }
 
-func (c *Controller) OnConfChange(key string, conf *helmv1.HelmChartConfig) (*helmv1.HelmChartConfig, error) {
-	if conf == nil {
-		return nil, nil
+func (c *Controller) shouldManage(chart *v1.HelmChart) (bool, error) {
+	if chart == nil {
+		return false, nil
+	}
+	if len(c.systemNamespace) > 0 && chart.Namespace != c.systemNamespace {
+		// do nothing if it's not in the namespace this controller was registered with
+		return false, nil
 	}
+	if chart.Spec.Chart == "" && chart.Spec.ChartContent == "" {
+		return false, nil
+	}
+	if chart.Annotations != nil {
+		if _, ok := chart.Annotations[Unmanaged]; ok {
+			return false, nil
+		}
+		managedBy, ok := chart.Annotations[ManagedBy]
+		if ok {
+			// if the label exists, only handle this if the managedBy label matches that of this controller
+			return managedBy == c.managedBy, nil
+		}
+	}
+	// The managedBy label does not exist, so we trigger claiming the HelmChart
+	// We then return false since this update will automatically retrigger an OnChange operation
+	chartCopy := chart.DeepCopy()
+	if chartCopy.Annotations == nil {
+		chartCopy.SetAnnotations(map[string]string{
+			ManagedBy: c.managedBy,
+		})
+	} else {
+		chartCopy.Annotations[ManagedBy] = c.managedBy
+	}
+	_, err := c.helms.Update(chartCopy)
+	return false, err
+}
+
+func (c *Controller) getJobAndRelatedResources(chart *v1.HelmChart) (*batch.Job, []runtime.Object, error) {
+	// set a default failure policy
+	failurePolicy := DefaultFailurePolicy
+	if chart.Spec.FailurePolicy != "" {
+		failurePolicy = chart.Spec.FailurePolicy
+	}
+
+	// get the default job and configmaps
+	job, valuesConfigMap, contentConfigMap := job(chart)
 
-	if chart, err := c.helmController.Cache().Get(conf.Namespace, conf.Name); err != nil {
+	// check if a HelmChartConfig is registered for this Helm chart
+	config, err := c.confCache.Get(chart.Namespace, chart.Name)
+	if err != nil {
 		if !errors.IsNotFound(err) {
-			return conf, err
+			return nil, nil, err
 		}
-	} else if chart != nil {
-		c.helmController.EnqueueAfter(conf.Namespace, conf.Name, time.Second)
 	}
-	return conf, nil
+	if config != nil {
+		// Merge the values into the HelmChart's values
+		valuesConfigMapAddConfig(valuesConfigMap, config)
+		// Override the failure policy to what is provided in the HelmChartConfig
+		if config.Spec.FailurePolicy != "" {
+			failurePolicy = config.Spec.FailurePolicy
+		}
+	}
+	// set the failure policy and add additional annotations to the job
+	// note: the purpose of the additional annotation is to cause the job to be destroyed
+	// and recreated if the hash of the HelmChartConfig changes while it is being processed
+	setFailurePolicy(job, failurePolicy)
+	hashConfigMaps(job, contentConfigMap, valuesConfigMap)
+
+	return job, []runtime.Object{
+		valuesConfigMap,
+		contentConfigMap,
+		serviceAccount(chart),
+		roleBinding(chart),
+	}, nil
 }
 
-func job(chart *helmv1.HelmChart) (*batch.Job, *core.ConfigMap, *core.ConfigMap) {
+func job(chart *v1.HelmChart) (*batch.Job, *corev1.ConfigMap, *corev1.ConfigMap) {
 	jobImage := strings.TrimSpace(chart.Spec.JobImage)
 	if jobImage == "" {
 		jobImage = DefaultJobImage
@@ -247,11 +356,11 @@
 	}
 
 	job := &batch.Job{
-		TypeMeta: meta.TypeMeta{
+		TypeMeta: metav1.TypeMeta{
 			APIVersion: "batch/v1",
 			Kind:       "Job",
 		},
-		ObjectMeta: meta.ObjectMeta{
+		ObjectMeta: metav1.ObjectMeta{
 			Name:      fmt.Sprintf("helm-%s-%s", action, chart.Name),
 			Namespace: chart.Namespace,
 			Labels: map[string]string{
@@ -260,22 +369,22 @@
 		},
 		Spec: batch.JobSpec{
 			BackoffLimit: pointer.Int32Ptr(1000),
-			Template: core.PodTemplateSpec{
-				ObjectMeta: meta.ObjectMeta{
+			Template: corev1.PodTemplateSpec{
+				ObjectMeta: metav1.ObjectMeta{
 					Annotations: map[string]string{},
 					Labels: map[string]string{
 						Label: chart.Name,
 					},
 				},
-				Spec: core.PodSpec{
-					RestartPolicy: core.RestartPolicyOnFailure,
-					Containers: []core.Container{
+				Spec: corev1.PodSpec{
+					RestartPolicy: corev1.RestartPolicyOnFailure,
+					Containers: []corev1.Container{
 						{
 							Name:            "helm",
 							Image:           jobImage,
-							ImagePullPolicy: core.PullIfNotPresent,
+							ImagePullPolicy: corev1.PullIfNotPresent,
 							Args:            args(chart),
-							Env: []core.EnvVar{
+							Env: []corev1.EnvVar{
 								{
 									Name:  "NAME",
 									Value: chart.Name,
@@ -318,45 +427,45 @@
 	}
 
 	if chart.Spec.Timeout != nil {
-		job.Spec.Template.Spec.Containers[0].Env = append(job.Spec.Template.Spec.Containers[0].Env, core.EnvVar{
+		job.Spec.Template.Spec.Containers[0].Env = append(job.Spec.Template.Spec.Containers[0].Env, corev1.EnvVar{
 			Name:  "TIMEOUT",
 			Value: chart.Spec.Timeout.Duration.String(),
 		})
 	}
 
 	job.Spec.Template.Spec.NodeSelector = make(map[string]string)
-	job.Spec.Template.Spec.NodeSelector[core.LabelOSStable] = "linux"
+	job.Spec.Template.Spec.NodeSelector[corev1.LabelOSStable] = "linux"
 
 	if chart.Spec.Bootstrap {
 		job.Spec.Template.Spec.NodeSelector[LabelNodeRolePrefix+LabelControlPlaneSuffix] = "true"
 		job.Spec.Template.Spec.HostNetwork = true
-		job.Spec.Template.Spec.Tolerations = []core.Toleration{
+		job.Spec.Template.Spec.Tolerations = []corev1.Toleration{
 			{
-				Key:    core.TaintNodeNotReady,
-				Effect: core.TaintEffectNoSchedule,
+				Key:    corev1.TaintNodeNotReady,
+				Effect: corev1.TaintEffectNoSchedule,
 			},
 			{
 				Key:      TaintExternalCloudProvider,
-				Operator: core.TolerationOpEqual,
+				Operator: corev1.TolerationOpEqual,
 				Value:    "true",
-				Effect:   core.TaintEffectNoSchedule,
+				Effect:   corev1.TaintEffectNoSchedule,
 			},
 			{
 				Key:      "CriticalAddonsOnly",
-				Operator: core.TolerationOpExists,
+				Operator: corev1.TolerationOpExists,
 			},
 			{
 				Key:      LabelNodeRolePrefix + LabelEtcdSuffix,
-				Operator: core.TolerationOpExists,
-				Effect:   core.TaintEffectNoExecute,
+				Operator: corev1.TolerationOpExists,
+				Effect:   corev1.TaintEffectNoExecute,
 			},
 			{
 				Key:      LabelNodeRolePrefix + LabelControlPlaneSuffix,
-				Operator: core.TolerationOpExists,
-				Effect:   core.TaintEffectNoSchedule,
+				Operator: corev1.TolerationOpExists,
+				Effect:   corev1.TaintEffectNoSchedule,
 			},
 		}
-		job.Spec.Template.Spec.Containers[0].Env = append(job.Spec.Template.Spec.Containers[0].Env, []core.EnvVar{
+		job.Spec.Template.Spec.Containers[0].Env = append(job.Spec.Template.Spec.Containers[0].Env, []corev1.EnvVar{
 			{
 				Name:  "KUBERNETES_SERVICE_HOST",
 				Value: "127.0.0.1"},
@@ -376,13 +485,13 @@
 	return job, valueConfigMap, contentConfigMap
 }
 
-func valuesConfigMap(chart *helmv1.HelmChart) *core.ConfigMap {
-	var configMap = &core.ConfigMap{
-		TypeMeta: meta.TypeMeta{
+func valuesConfigMap(chart *v1.HelmChart) *corev1.ConfigMap {
+	var configMap = &corev1.ConfigMap{
+		TypeMeta: metav1.TypeMeta{
 			APIVersion: "v1",
 			Kind:       "ConfigMap",
 		},
-		ObjectMeta: meta.ObjectMeta{
+		ObjectMeta: metav1.ObjectMeta{
 			Name:      fmt.Sprintf("chart-values-%s", chart.Name),
 			Namespace: chart.Namespace,
 		},
@@ -399,19 +508,19 @@
 	return configMap
 }
 
-func valuesConfigMapAddConfig(configMap *core.ConfigMap, config *helmv1.HelmChartConfig) {
+func valuesConfigMapAddConfig(configMap *corev1.ConfigMap, config *v1.HelmChartConfig) {
 	if config.Spec.ValuesContent != "" {
 		configMap.Data["values-10_HelmChartConfig.yaml"] = config.Spec.ValuesContent
 	}
 }
 
-func roleBinding(chart *helmv1.HelmChart) *rbac.ClusterRoleBinding {
+func roleBinding(chart *v1.HelmChart) *rbac.ClusterRoleBinding {
 	return &rbac.ClusterRoleBinding{
-		TypeMeta: meta.TypeMeta{
+		TypeMeta: metav1.TypeMeta{
 			APIVersion: "rbac.authorization.k8s.io/v1",
 			Kind:       "ClusterRoleBinding",
 		},
-		ObjectMeta: meta.ObjectMeta{
+		ObjectMeta: metav1.ObjectMeta{
 			Name: fmt.Sprintf("helm-%s-%s", chart.Namespace, chart.Name),
 		},
 		RoleRef: rbac.RoleRef{
@@ -429,13 +538,13 @@
 	}
 }
 
-func serviceAccount(chart *helmv1.HelmChart) *core.ServiceAccount {
-	return &core.ServiceAccount{
-		TypeMeta: meta.TypeMeta{
+func serviceAccount(chart *v1.HelmChart) *corev1.ServiceAccount {
+	return &corev1.ServiceAccount{
+		TypeMeta: metav1.TypeMeta{
 			APIVersion: "v1",
 			Kind:       "ServiceAccount",
 		},
-		ObjectMeta: meta.ObjectMeta{
+		ObjectMeta: metav1.ObjectMeta{
 			Name:      fmt.Sprintf("helm-%s", chart.Name),
 			Namespace: chart.Namespace,
 		},
@@ -443,7 +552,7 @@
 	}
 }
 
-func args(chart *helmv1.HelmChart) []string {
+func args(chart *v1.HelmChart) []string {
 	if chart.DeletionTimestamp != nil {
 		return []string{
 			"delete",
@@ -528,7 +637,7 @@
 		if len(proxyEnvValue) == 0 {
 			continue
 		}
-		envar := core.EnvVar{
+		envar := corev1.EnvVar{
 			Name:  proxyEnv,
 			Value: proxyEnvValue,
 		}
@@ -538,13 +647,13 @@
 	}
 }
 
-func contentConfigMap(chart *helmv1.HelmChart) *core.ConfigMap {
-	configMap := &core.ConfigMap{
-		TypeMeta: meta.TypeMeta{
+func contentConfigMap(chart *v1.HelmChart) *corev1.ConfigMap {
+	configMap := &corev1.ConfigMap{
+		TypeMeta: metav1.TypeMeta{
 			APIVersion: "v1",
 			Kind:       "ConfigMap",
 		},
-		ObjectMeta: meta.ObjectMeta{
+		ObjectMeta: metav1.ObjectMeta{
 			Name:      fmt.Sprintf("chart-content-%s", chart.Name),
 			Namespace: chart.Namespace,
 		},
@@ -559,21 +668,21 @@
 	return configMap
 }
 
-func setValuesConfigMap(job *batch.Job, chart *helmv1.HelmChart) *core.ConfigMap {
+func setValuesConfigMap(job *batch.Job, chart *v1.HelmChart) *corev1.ConfigMap {
 	configMap := valuesConfigMap(chart)
 
-	job.Spec.Template.Spec.Volumes = append(job.Spec.Template.Spec.Volumes, core.Volume{
+	job.Spec.Template.Spec.Volumes = append(job.Spec.Template.Spec.Volumes, corev1.Volume{
 		Name: "values",
-		VolumeSource: core.VolumeSource{
-			ConfigMap: &core.ConfigMapVolumeSource{
-				LocalObjectReference: core.LocalObjectReference{
+		VolumeSource: corev1.VolumeSource{
+			ConfigMap: &corev1.ConfigMapVolumeSource{
+				LocalObjectReference: corev1.LocalObjectReference{
 					Name: configMap.Name,
 				},
 			},
 		},
 	})
 
-	job.Spec.Template.Spec.Containers[0].VolumeMounts = append(job.Spec.Template.Spec.Containers[0].VolumeMounts, core.VolumeMount{
+	job.Spec.Template.Spec.Containers[0].VolumeMounts = append(job.Spec.Template.Spec.Containers[0].VolumeMounts, corev1.VolumeMount{
 		MountPath: "/config",
 		Name:      "values",
 	})
@@ -581,24 +690,24 @@
 	return configMap
 }
 
-func setContentConfigMap(job *batch.Job, chart *helmv1.HelmChart) *core.ConfigMap {
+func setContentConfigMap(job *batch.Job, chart *v1.HelmChart) *corev1.ConfigMap {
 	configMap := contentConfigMap(chart)
 	if configMap == nil {
 		return nil
 	}
 
-	job.Spec.Template.Spec.Volumes = append(job.Spec.Template.Spec.Volumes, core.Volume{
+	job.Spec.Template.Spec.Volumes = append(job.Spec.Template.Spec.Volumes, corev1.Volume{
 		Name: "content",
-		VolumeSource: core.VolumeSource{
-			ConfigMap: &core.ConfigMapVolumeSource{
-				LocalObjectReference: core.LocalObjectReference{
+		VolumeSource: corev1.VolumeSource{
+			ConfigMap: &corev1.ConfigMapVolumeSource{
+				LocalObjectReference: corev1.LocalObjectReference{
 					Name: configMap.Name,
 				},
 			},
 		},
 	})
 
-	job.Spec.Template.Spec.Containers[0].VolumeMounts = append(job.Spec.Template.Spec.Containers[0].VolumeMounts, core.VolumeMount{
+	job.Spec.Template.Spec.Containers[0].VolumeMounts = append(job.Spec.Template.Spec.Containers[0].VolumeMounts, corev1.VolumeMount{
 		MountPath: "/chart",
 		Name:      "content",
 	})
@@ -607,13 +716,13 @@
 }
 
 func setFailurePolicy(job *batch.Job, failurePolicy string) {
-	job.Spec.Template.Spec.Containers[0].Env = append(job.Spec.Template.Spec.Containers[0].Env, core.EnvVar{
+	job.Spec.Template.Spec.Containers[0].Env = append(job.Spec.Template.Spec.Containers[0].Env, corev1.EnvVar{
 		Name:  "FAILURE_POLICY",
 		Value: failurePolicy,
 	})
 }
 
-func hashConfigMaps(job *batch.Job, maps ...*core.ConfigMap) {
+func hashConfigMaps(job *batch.Job, maps ...*corev1.ConfigMap) {
 	hash := sha256.New()
 
 	for _, configMap := range maps {
@@ -628,4 +737,4 @@
 	}
 
 	job.Spec.Template.ObjectMeta.Annotations[Annotation] = fmt.Sprintf("SHA256=%X", hash.Sum(nil))
-}
\ No newline at end of file
+}
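One design note worth pulling out of the comments in the diff: each controller instance applies its resources under a set ID derived from its own managedBy string, so two controllers never garbage-collect each other's objects. A minimal sketch of that pattern, using only the wrangler apply calls that appear in the diff (the function and its names here are illustrative, not part of the PR):

```go
package chart

import (
	"fmt"

	"github.com/rancher/wrangler/pkg/apply"
	"k8s.io/apimachinery/pkg/runtime"
)

// applyOwnedSet (hypothetical helper) applies objs under a set ID derived
// from managedBy. wrangler's apply records every object it creates under the
// set ID and prunes anything in that set that is not re-applied; if two
// controller instances shared one set ID, each would treat the other's
// resources as stale and delete them. Prefixing managedBy keeps the sets disjoint.
func applyOwnedSet(a apply.Apply, managedBy string, owner runtime.Object, objs ...runtime.Object) error {
	setID := fmt.Sprintf("%s-chart-registration", managedBy)
	return a.
		WithSetID(setID).
		WithOwner(owner).
		ApplyObjects(objs...)
}
```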

aiyengar2 avatar Aug 30 '22 18:08 aiyengar2

I've tried to approach this a few times, but honestly it's too much for me personally to review in a single PR. 42 commits and 3k changed files is huge. GitHub won't even preview all the changes as it's too many for their compare UI to load.

Can you perhaps break this down into a couple high-level goals, with a separate PR for each, as was discussed in earlier comments? In the individual PRs, it might also be possible to squash some of the commits, as it seems that some of the current commits contain changes (refactoring or bugfixes) to code that was added earlier in the same PR.

Maybe break them down into:

  1. Remove vendoring
  2. Upgrade golang version and modules
  3. Update makefiles and codegen
  4. Clean up imports and reorganize existing code without changing code itself
  5. Refactor the controller

brandond avatar Aug 31 '22 21:08 brandond

> I've tried to approach this a few times, but honestly it's too much for me personally to review in a single PR. 42 commits and 3k changed files is huge. GitHub won't even preview all the changes as it's too many for their compare UI to load.
>
> Can you perhaps break this down into a couple high-level goals, with a separate PR for each, as was discussed in earlier comments? In the individual PRs, it might also be possible to squash some of the commits, as it seems that some of the current commits contain changes (refactoring or bugfixes) to code that was added earlier in the same PR.
>
> Maybe break them down into:
>
>   1. Remove vendoring
>   2. Upgrade golang version and modules
>   3. Update makefiles and codegen
>   4. Clean up imports and reorganize existing code without changing code itself
>   5. Refactor the controller

Will do, PRs incoming.

I'll leave this open until all of the PRs have been made and merged!

aiyengar2 avatar Sep 07 '22 18:09 aiyengar2

https://github.com/k3s-io/helm-controller/pull/158 is the final PR that brings master to parity with the contents of this PR, so I will be closing this PR out in favor of that one.

aiyengar2 avatar Sep 08 '22 17:09 aiyengar2