
Allow setting tolerations and nodeSelector for Operator

CtrlAltDft opened this issue 11 months ago • 4 comments

Following the installation guide in this repo, we need to apply a single Kubernetes YAML file from the releases.

However, installing that way removes any possibility of customising the configuration of the Operator Deployment. Can we have a more intuitive way to install the Operator? Maybe split the manifest out into individual files?

CtrlAltDft avatar Dec 11 '24 02:12 CtrlAltDft

Based on your comment, I would recommend you download the single manifest file, add your customizations, maintain it in git, and then deploy your customized single manifest file from your git environment.

redhatrises avatar Dec 11 '24 02:12 redhatrises

That works, but users will not be able to benefit from updates; we will have to maintain the file ourselves and compare the changes whenever there is a new release.

For reference, in Terraform we have a workaround that filters out the specific Deployment so we can add tolerations and a nodeSelector, as well as modify the CPU/memory requests and limits:

data "http" "falcon_operator_manifest" {
  url = "https://github.com/crowdstrike/falcon-operator/releases/latest/download/falcon-operator.yaml"
}

data "kubectl_file_documents" "falcon_operator_documents" {
  content = data.http.falcon_operator_manifest.response_body
}

locals {
  # Manifest key of the controller-manager Deployment within the release bundle
  deployment_key = "/apis/apps/v1/namespaces/falcon-operator/deployments/falcon-operator-controller-manager"

  # Decoded Deployment manifest (empty map if the key is missing)
  deployment = yamldecode(
    lookup(data.kubectl_file_documents.falcon_operator_documents.manifests, local.deployment_key, "{}")
  )
}

resource "kubectl_manifest" "falcon_operator" {
  # Apply every manifest except the controller-manager Deployment, which is patched below
  for_each = {
    for key, manifest in data.kubectl_file_documents.falcon_operator_documents.manifests :
    key => manifest
    if !(yamldecode(manifest)["metadata"]["name"] == "falcon-operator-controller-manager" &&
         yamldecode(manifest)["kind"] == "Deployment")
  }
  yaml_body = each.value
}

resource "kubectl_manifest" "falcon_operator_controller_manager" {
  yaml_body = yamlencode(
    merge(
      local.deployment,
      {
        "spec" = {
          "replicas" = lookup(local.deployment["spec"], "replicas", 0)
          "selector" = lookup(local.deployment["spec"], "selector", {})
          "template" = {
            "metadata" = lookup(local.deployment["spec"]["template"], "metadata", {})
            "spec" = merge(
              lookup(local.deployment["spec"]["template"], "spec", {}), # default to empty object
              {
                # Append our toleration to any tolerations already present
                "tolerations" = concat(
                  try(lookup(local.deployment["spec"]["template"]["spec"], "tolerations", []), []),
                  [
                    {
                      key      = "<key>"
                      operator = "Equal"
                      value    = "<value>"
                      effect   = "NoSchedule"
                    }
                  ]
                ),
                "nodeSelector" = {
                  "<key>" = "<value>"
                },
                # Override the resource requests/limits of the first (manager) container
                "containers" = [
                  merge(
                    local.deployment["spec"]["template"]["spec"]["containers"][0],
                    {
                      "resources" = {
                        "requests" = {
                          "cpu"    = "500m"
                          "memory" = "256Mi"
                        }
                        "limits" = {
                          "cpu"    = "500m"
                          "memory" = "256Mi"
                        }
                      }
                    }
                  )
                ]
              }
            )
          }
        }
      }
    )
  )
  depends_on = [
    kubectl_manifest.falcon_operator
  ]
}

But it would be better to avoid this type of logic, in case there are unexpected changes to the manifest structure in the future.

CtrlAltDft avatar Dec 11 '24 08:12 CtrlAltDft

> That works. But users will not be able to benefit from the updates, we will have to self maintain and compare the changes when there are new releases.

It is actually a security and IT operations best practice to use GitOps practices with K8s: review changes and updates while storing everything in git. Otherwise, how you are currently doing it in Terraform is how you would do it.

redhatrises avatar Dec 13 '24 21:12 redhatrises

I recommend using Kustomize with remote bases to apply your patches. Create a file named falcon-operator/kustomization.yaml with this content:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - https://raw.githubusercontent.com/CrowdStrike/falcon-operator/refs/tags/v1.4.0/deploy/falcon-operator.yaml

patches:
  - patch: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: falcon-operator-controller-manager
        namespace: falcon-operator
      spec:
        template:
          spec:
            tolerations:
              - key: "<key>"
                operator: "Equal"
                value: "<value>"
                effect: "NoSchedule"
            nodeSelector:
              "<key>": "<value>"
            containers:
              - name: manager
                resources:
                  requests:
                    cpu: 500m
                    memory: 256Mi
                  limits:
                    cpu: 500m
                    memory: 256Mi

Deploy using kubectl apply -k ./falcon-operator. If you don't want to reference a resource on the internet, just download the release manifest and reference the local file instead.
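For the local-file variant, a minimal sketch of the kustomization could look like this (assuming you saved the release manifest next to the kustomization.yaml; the filename is a placeholder):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  # Release manifest downloaded from the falcon-operator releases page and kept in git
  - falcon-operator.yaml
```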

I am surprised that this was not the first answer. Kustomize is standard Kubernetes tooling.

> I would recommend you download the single manifest file, add your customization, maintain in git, and then deploy your custom version single manifest file from your git environment.

That sounds like a maintenance nightmare. Do you really want to manually diff your edits again whenever there is a new release of the Operator manifests?

> It is actually a best security and IT operations practice to use gitops practices

Every GitOps tool supports Kustomize out of the box. You can use FluxCD, ArgoCD, Rancher Fleet, etc., to deploy the Kustomization shown above.
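As a sketch, with FluxCD the kustomization above could be reconciled declaratively by a resource like this (the GitRepository name and path are hypothetical, and it assumes your repo is already registered with Flux):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: falcon-operator
  namespace: flux-system
spec:
  interval: 10m
  prune: true
  path: ./falcon-operator   # directory containing the kustomization.yaml above
  sourceRef:
    kind: GitRepository
    name: my-gitops-repo    # hypothetical; must already exist in flux-system
```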

ChristianCiach avatar Apr 02 '25 17:04 ChristianCiach