
Rancher 2.5 Fleet/Continuous Delivery Resources/Datasources

Open mitchellmaler opened this issue 5 years ago • 5 comments

Request to add resources and datasources to manage fleet/continuous delivery in Rancher 2.5+

mitchellmaler avatar Nov 04 '20 02:11 mitchellmaler

I'd like the ability to specify a fleet workspace for cluster imports on the rancher2_cluster resource.
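To illustrate the request, a cluster import with a workspace argument might look roughly like this. Note that `fleet_workspace_name` is a hypothetical argument name for the requested feature; it does not exist in the provider today:

```hcl
# Sketch only: "fleet_workspace_name" is the requested (hypothetical)
# argument and is not implemented in the rancher2 provider yet.
resource "rancher2_cluster" "imported" {
  name        = "imported-cluster"
  description = "Imported cluster placed in a non-default Fleet workspace"

  # Requested feature: assign the imported cluster to a Fleet workspace
  fleet_workspace_name = "my-workspace"
}
```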

tsproull avatar Dec 09 '20 01:12 tsproull

As a rough workaround, you can set up your cluster groups and git repos from Terraform using the kubernetes_manifest resource from the kubernetes-alpha provider (eventually to be moved to the "real" kubernetes provider).

It's not a very nice process, but it does allow you to manage Fleet configuration from Terraform for now, while waiting for an update to the rancher2 provider (or a new Fleet-specific provider).
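For example, a Fleet cluster group managed this way might look roughly like the following sketch (the resource name, group name, namespace, and label selector are all placeholder values):

```hcl
# Sketch of the workaround: managing a Fleet ClusterGroup directly with
# kubernetes_manifest (kubernetes-alpha provider) instead of a native
# rancher2 resource. All names and labels here are placeholders.
resource "kubernetes_manifest" "fleet_cluster_group" {
  manifest = {
    "apiVersion" = "fleet.cattle.io/v1alpha1"
    "kind"       = "ClusterGroup"
    "metadata" = {
      "name"      = "dev-clusters"
      "namespace" = "fleet-default"
    }
    "spec" = {
      # Select downstream clusters by label
      "selector" = {
        "matchLabels" = {
          "env" = "dev"
        }
      }
    }
  }
}
```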

philomory avatar May 21 '21 23:05 philomory

Is any work likely to happen even just to add the fleet workspace parameter? That doesn't seem hard, and we would be happy to put together a PR if it's likely to get released.

sgran avatar Jul 09 '21 08:07 sgran

@sgran , more than happy to accept a PR with this feature

rawmind0 avatar Jul 09 '21 15:07 rawmind0

I have successfully used the Terraform below and it works, as long as Rancher is already deployed. However, when you install from scratch and your Rancher install is part of the same Terraform folder as your Kubernetes manifests, you will get the error below.

Code:

# Create creds for use with Cont Delivery
resource "rancher2_secret_v2" "github-creds" {
  depends_on = [
    rancher2_bootstrap.admin
  ]
  cluster_id = "local"
  name       = var.cred-gh-name
  namespace  = var.fleet_namespace
  type       = "kubernetes.io/ssh-auth"
  data = {
    ssh-publickey  = base64decode(local.gh_creds.gh_ssh_pub)
    ssh-privatekey = base64decode(local.gh_creds.gh_ssh_priv)
  }
}

# Add Longhorn for Cont Delivery
resource "kubernetes_manifest" "contdel-longhorn" {
  depends_on = [
    rancher2_secret_v2.github-creds,
    helm_release.helm_rancher,
    null_resource.wait4kubcfg
  ]
  manifest = {
    "apiVersion" = "fleet.cattle.io/v1alpha1"
    "kind"       = "GitRepo"
    "metadata" = {
      "name"      = "longhorn"
      "annotations" = {
        "field.cattle.io/description" = "Persistent Storage for Rancher clusters"
      }
      "namespace" = var.fleet_namespace
    }
    "spec" = {
      "branch"                = "main"
      "clientSecretName"      = var.cred-gh-name
      "insecureSkipTLSVerify" = true
      "paths" = [
        "/helm/longhorn"
      ]
      "repo"    = var.fleet_gh_url
      "targets" = [{
        "clusterSelector" = {}
      }]
    }
  }
}

Error:

│ Error: Failed to determine GroupVersionResource for manifest
│ 
│   with kubernetes_manifest.contdel-longhorn,
│   on rancher_contdelivery.tf line 19, in resource "kubernetes_manifest" "contdel-longhorn":
│   19: resource "kubernetes_manifest" "contdel-longhorn" {
│ 
│ no matches for kind "GitRepo" in group "fleet.cattle.io"

It appears that this is a limitation of the kubernetes_manifest resource: it needs the CRD to already exist in the cluster at plan time in order to resolve the GroupVersionResource, as you can see in this issue: https://github.com/hashicorp/terraform-provider-kubernetes/issues/1367

It looks like people are working around this by creating a CRD but I have not tried that yet.
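Another approach mentioned in the linked issue (an assumption on my part; untested here) is a staged apply, so that Rancher, and therefore the Fleet CRDs, exist before Terraform plans the kubernetes_manifest resource:

```shell
# First apply only the resources that install Rancher (and with it the
# Fleet CRDs), then apply the rest, including the GitRepo manifest.
# "helm_release.helm_rancher" matches the resource name used above.
terraform apply -target=helm_release.helm_rancher
terraform apply
```

Using `-target` is generally discouraged for routine use, but it works as a one-time bootstrap until the provider dependency can be resolved properly.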

bennysp avatar Nov 12 '21 20:11 bennysp