
Option for `additionalConfig` to update the `opensearch.yml`


Problem Statement

Coming from the comment https://github.com/Opster/opensearch-k8s-operator/issues/204#issuecomment-1264798551: passing `additionalConfig` in the YAML does not update the `opensearch.yml` file directly; instead, the values are added as environment variables to the generated StatefulSet. This works for settings that can be passed to the cluster via environment variables, but it leaves a gap for settings that must be written directly into `opensearch.yml`.
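For context, here is a minimal sketch of the current behaviour (not the operator's actual code; the helper name is illustrative), where every `additionalConfig` entry becomes a container environment variable instead of a line in `opensearch.yml`:

	import corev1 "k8s.io/api/core/v1"

	// buildAdditionalConfigEnv mirrors the current behaviour: each key/value
	// pair is exposed to the pod as an env var, not written to opensearch.yml.
	func buildAdditionalConfigEnv(additionalConfig map[string]string) []corev1.EnvVar {
		env := make([]corev1.EnvVar, 0, len(additionalConfig))
		for key, value := range additionalConfig {
			env = append(env, corev1.EnvVar{Name: key, Value: value})
		}
		return env
	}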

Solution Proposed:

Have a mechanism to write `additionalConfig` directly into `opensearch.yml` and reload/restart the cluster. Also have a way for a user to choose whether a given `additionalConfig` entry is passed as an environment variable or injected into `opensearch.yml`; that way the existing setup is not broken, while the new support for injecting into `opensearch.yml` is added. @idanl21 @swoehrl-mw @segalziv @dbason and all other community folks, please add your thoughts.
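One hypothetical shape for such a per-entry choice (type and field names are purely illustrative and not part of the operator's API, which currently takes a plain string map) might look like this:

	// Hypothetical sketch only: a per-entry target selector. Adopting this
	// would change the CRD schema, so it is shown purely for discussion.
	type AdditionalConfigEntry struct {
		Value string `json:"value"`
		// Target selects where the entry is applied: "env" keeps the current
		// behaviour, "yml" injects the entry into opensearch.yml.
		Target string `json:"target,omitempty"`
	}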

prudhvigodithi Oct 03 '22 14:10

Also have a way for a user to choose whether a given `additionalConfig` entry is passed as an environment variable or injected into `opensearch.yml`

I think we should not give the user a choice, to avoid extra complexity (both in the implementation and for the user), and instead only use `opensearch.yml`, treating the delivery mechanism as an implementation detail.

Sidenote: Using opensearch.yml would also fix the problem with the empty roles list from #187.

swoehrl-mw Oct 04 '22 07:10

Hi everybody 😄 After several attempts at solving this, I keep running into an infinite reconcile loop when trying to create a volume per node pool for the `opensearch.yml` files. This is the important part of the volume logic:

	// Add config map with opensearch.yml
	volumeName := fmt.Sprintf("%s-volume", cmName)
	volumes = append(volumes, corev1.Volume{
		Name: volumeName,
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: cmName,
				},
			},
		},
	})

	// Mount only the opensearch.yml key from the ConfigMap over the default
	// config file via SubPath, leaving the rest of the config directory intact.
	volumeMounts = append(volumeMounts, corev1.VolumeMount{
		Name:      volumeName,
		MountPath: "/usr/share/opensearch/config/opensearch.yml",
		SubPath:   "opensearch.yml",
	})

If I understand the issue correctly, the error stems from pkg/reconcilers/configuration.go, which creates the base `opensearch.yml` file and thereby changes the calculated NodePoolHash, causing an infinite reconcile loop because the actual node pool configuration is out of sync with the hashed version.
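That loop is what happens whenever the hash input and the applied configuration diverge, so the hash has to be computed over the same merged config that is actually written out. A minimal sketch of a deterministic hash over a merged config (assuming the hash input is the node pool's effective settings; the function name is illustrative):

	import (
		"crypto/sha1"
		"encoding/hex"
		"fmt"
		"sort"
	)

	// configHash hashes the merged settings in sorted key order, so the result
	// only changes when the effective configuration changes.
	func configHash(merged map[string]string) string {
		keys := make([]string, 0, len(merged))
		for k := range merged {
			keys = append(keys, k)
		}
		sort.Strings(keys)

		h := sha1.New()
		for _, k := range keys {
			fmt.Fprintf(h, "%s=%s\n", k, merged[k])
		}
		return hex.EncodeToString(h.Sum(nil))
	}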

My attempt at the logic, while still keeping the base configuration generated by configuration.go, was this:

	for _, nodePool := range r.instance.Spec.NodePools {
		headlessService := builders.NewHeadlessServiceForNodePool(r.instance, &nodePool)
		result.CombineErr(ctrl.SetControllerReference(r.instance, headlessService, r.Client.Scheme()))
		result.Combine(r.ReconcileResource(headlessService, reconciler.StatePresent))

		cmName := fmt.Sprintf("%s-config-%s", r.instance.Name, nodePool.Component)

		// Layer the configs: start from the base generated by configuration.go,
		// apply the cluster-wide additionalConfig, then node-pool overrides.
		baseConfig := r.reconcilerContext.OpenSearchConfig
		withGeneral := helpers.MergeConfigs(baseConfig, r.instance.Spec.General.AdditionalConfig)
		mergedConfigs := helpers.MergeConfigs(withGeneral, nodePool.AdditionalConfig)

		cm := builders.NewClusterConfigMapForCR(
			r.instance, cmName, mergedConfigs,
		)
		result.CombineErr(ctrl.SetControllerReference(r.instance, cm, r.Client.Scheme()))
		result.Combine(r.ReconcileResource(cm, reconciler.StatePresent))

		result.Combine(r.reconcileNodeStatefulSet(nodePool, username, cmName))
	}

Any ideas on how to tackle this, @swoehrl-mw?

lieberlois Nov 28 '22 12:11

I don't think it's even added as an environment variable. I have this in my general section; as per this MR, the bootstrap pod should ideally honour the general `additionalConfig` setting, but the setting does not show up as an env variable.

  general:
    version: 2.3.0
    httpPort: 9200
    vendor: opensearch
    serviceName: test-poc
    setVMMaxMapCount: true
    additionalConfig:
      plugins.security.ssl.transport.enforce_hostname_verification: "false"

Also, as suggested in the MR, I tried the following, but it gives a validation error.

spec:
  bootstrap:
    additionalConfig:
      plugins.security.ssl.transport.enforce_hostname_verification: "false"

Error

error: error validating "opensearch-cluster-securityconfig_sefo-poc_inbuilt.yaml": error validating data: ValidationError(OpenSearchCluster.spec.bootstrap): unknown field "additionalConfig" in io.opster.opensearch.v1.OpenSearchCluster.spec.bootstrap; if you choose to ignore these errors, turn validation off with --validate=false

One more issue: it doesn't allow boolean values, which is why I am passing false as a string.

The OpenSearchCluster "test-poc" is invalid: spec.general.additionalConfig.plugins.security.ssl.transport.enforce_hostname_verification: Invalid value: "boolean": spec.general.additionalConfig.plugins.security.ssl.transport.enforce_hostname_verification in body must be of type string: "boolean"

ervikrant06 Dec 07 '22 10:12

Hi @ervikrant06. The PR you linked is not yet part of a release, so you will only get that functionality if you build the operator yourself from master.

One more issue: it doesn't allow boolean values, which is why I am passing false as a string.

This is a limitation of the type definitions we use to define the CRD. Providing values like booleans or integers as strings is the normal way to handle this.
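To illustrate the limitation (a sketch, assuming the field is a string-to-string map, as the validation error above suggests; the struct name is illustrative):

	type GeneralConfig struct {
		// A map[string]string only admits string values, so the API server's
		// schema validation accepts "false" but rejects an unquoted false.
		AdditionalConfig map[string]string `json:"additionalConfig,omitempty"`
	}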

swoehrl-mw Dec 07 '22 13:12

Thanks @swoehrl-mw

After building the image from master I can see the custom setting reflected as an env variable; I will continue the discussion in the original bug report.

[opensearch@test-poc-bootstrap-0 ~]$ env | grep -i hostname_
plugins.security.ssl.transport.enforce_hostname_verification=false

ervikrant06 Dec 19 '22 11:12

Closing the issue :)

idanl21 Mar 21 '23 10:03