
Invalid new value

Open · Eric-Fontana-Indico opened this issue 3 years ago · 7 comments

When expanding the plan for ssh_resource.install-command to include new values learned so far during apply, provider "registry.terraform.io/loafoe/ssh" produced an invalid new value for .host: was cty.StringVal("13.57.41.78"), but now cty.StringVal("54.153.12.15").

This is a bug in the provider, which should be reported in the provider's own issue tracker.

Eric-Fontana-Indico · Sep 27 '22

@Eric-Fontana-Indico thanks for reporting. Is it possible to share the declaration of this install-command?

loafoe · Sep 27 '22

Sure, here it is:


resource "ssh_resource" "install-command" {
  depends_on = [
    aws_instance.sn,
    ssh_resource.install-smoketest-values,
    ssh_resource.install-snapshot-values
  ]
  when = "create"

  private_key = tls_private_key.oskey.private_key_pem
  host        = aws_instance.sn.public_ip
  user        = var.instance_username

  file {
    content     = <<-EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: indico-rwx-store
  labels:
    type: local
spec:
  storageClassName: ""
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data"

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: read-write
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: ""
---
apiVersion: v1
data:
  .dockerconfigjson: "${var.harbor_pull_secret_b64}"
kind: Secret
metadata:
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
  name: harbor-pull-secret
  namespace: default
type: kubernetes.io/dockerconfigjson
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: indico-sc
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: Immediate
EOF
    destination = "/tmp/harbor-pull-secret.yaml"
    permissions = "0700"
  }

  file {
    content     = <<-EOF
cert-manager:
  cainjector:
    nodeSelector:
      kubernetes.io/os: linux
  enabled: true
  installCRDs: true
  nodeSelector:
    kubernetes.io/os: linux
  webhook:
    nodeSelector:
      kubernetes.io/os: linux
crunchy-pgo:
  enabled: true
EOF
    destination = "/tmp/ipa-crds-values.yaml"
    permissions = "0700"
  }

  file {
    content     = <<-EOF
aws-fsx-csi-driver:
  enabled: false
cluster-autoscaler:
  enabled: false
crunchy-postgres:
  enabled: true
  postgres-data:
    metadata:
      annotations:
        reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
        reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
    enabled: true
    imagePullSecrets:
      - name: harbor-pull-secret
    users:
    - databases:
      - noct
      - cyclone
      - crowdlabel
      - moonbow
      - elmosfire
      - elnino
      - sunbow
      - doctor
      - meteor
      name: indico
      options: SUPERUSER CREATEROLE CREATEDB REPLICATION BYPASSRLS
    instances:
    - metadata:
        annotations:
          reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
          reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
      dataVolumeClaimSpec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 30Gi
        storageClassName: local-path
      name: pgha1
      replicas: 1
external-dns:
  aws:
    region: ${var.region}
    zoneType: public
  enabled: true
  policy: sync
  provider: aws
  sources:
  - service
  - ingress
  txtPrefix: ${var.label}-${var.region}
  txtOwnerId: sn-${var.label}-${var.region}-${var.aws_account}
metrics-server:
  enabled: false
rabbitmq:
  enabled: true
  hpa:
    enabled: false
  rabbitmq:
    replicaCount: 1
secrets:
  clusterIssuer:
    zerossl:
      create: true
      eabEmail: [email protected]
      eabKid: "${jsondecode(data.vault_kv_secret_v2.zerossl_data.data_json)["EAB_KID"]}"
      eabHmacKey: "${jsondecode(data.vault_kv_secret_v2.zerossl_data.data_json)["EAB_HMAC_KEY"]}"
  general:
    create: true
  rabbitmq:
    create: true
storage:
  existingPVC: true
  indicoStorageClass:
    enabled: false
EOF
    destination = "/tmp/ipa-pre-reqs-values.yaml"
    permissions = "0700"
  }

  file {
    destination = "/tmp/monitoring-values.yaml"
    permissions = "0700"
    content     = <<-EOF
authentication:
  ingressPassword: Monitoring123!
  ingressUsername: monitoring
global:
  host: ${local.dns_name}
ingress-nginx:
  enabled: true
  controller:
    service:
      type: NodePort
      nodePorts:
        https: 32443
        http: 32706
kube-prometheus-stack:    
  alertmanager:
    alertmanagerSpec:
      nodeSelector:
        node_group: static-workers
    ingress:
      paths:
        - /alertmanager
      annotations:
        cert-manager.io/cluster-issuer: zerossl
      hosts:
        - alertmanager-${local.dns_name}
      tls:
        - secretName: alertmanager-tls
          hosts:
            - alertmanager-${local.dns_name}
  prometheusOperator:
    nodeSelector:
      node_group: static-workers
  prometheus:
    prometheusSpec:
      nodeSelector:
        node_group: static-workers
      storageSpec:
        volumeClaimTemplate:
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
            storageClassName: local-path
    ingress:
      annotations:
        cert-manager.io/cluster-issuer: zerossl
        nginx.ingress.kubernetes.io/auth-type: basic
        nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
        nginx.ingress.kubernetes.io/auth-secret: prometheus-auth
      paths:
        - /prometheus
      hosts:
        - prometheus-${local.dns_name}
      tls:
        - secretName: prometheus-tls
          hosts:
            - prometheus-${local.dns_name}

  grafana:
    env:
      GF_SERVER_ROOT_URL: https://grafana-${local.dns_name}
    nodeSelector:
      node_group: static-workers
    ingress:
      path: /
      annotations:
        cert-manager.io/cluster-issuer: zerossl
      hosts:
        - grafana-${local.dns_name}
      tls:
      - secretName: grafana-tls
        hosts:
          - grafana-${local.dns_name}
EOF
  }


  file {
    destination = "/tmp/keda-monitoring-values.yaml"
    permissions = "0700"
    content     = <<-EOF
crds:
  install: true

podAnnotations:
  keda:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "8080"
  metricsAdapter: 
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "9022"

prometheus:
  metricServer:
    enabled: true
    podMonitor:
      enabled: true
  operator:
    enabled: true
    podMonitor:
      enabled: true    
EOF 
  }


  file {
    destination = "/tmp/opentelemetry-monitoring-values.yaml"
    permissions = "0700"
    content     = <<-EOF
enabled: true
fullnameOverride: "collector-collector"
mode: deployment
tolerations:
- effect: NoSchedule
  key: indico.io/monitoring
  operator: Exists
nodeSelector:
  node_group: static-workers
ports:
  jaeger-compact:
    enabled: false
  jaeger-thrift:
    enabled: false
  jaeger-grpc:
    enabled: false
  zipkin:
    enabled: false

config:
  receivers:
    jaeger: null
    prometheus: null
    zipkin: null
  exporters:
    otlp:
      endpoint: monitoring-tempo.monitoring.svc:4317
      tls:
        insecure: true
  service:
    pipelines:
      traces:
        receivers:
          - otlp
        processors:
          - batch
        exporters:
          - otlp
      metrics: null
      logs: null    
EOF 
  }

  file {
    destination = "/tmp/ipa-user-values.yaml"
    permissions = "0700"
    content     = <<-EOF
${base64decode(var.ipa_values)}
EOF 
  }

  file {
    destination = "/tmp/ipa-values.yaml"
    permissions = "0700"
    content     = <<-EOF
app-edge:
  service:
    type: NodePort
    ports:
      http_port: ${var.app_http_port}
      https_port: ${var.app_https_port}
      http_api_port: ${var.app_api_port}
  
global:
  appDomains: 
    - ${local.dns_name}
rabbitmq:
  enabled: true
secrets:
  ocr_license_key: ${base64decode(data.vault_generic_secret.ocr-license.data["key"])}
server:
  resources:
    requests:
      cpu: 0
faust-worker:
  resources:
    requests:
      cpu: 0
worker:
  resources:
    requests:
      cpu: 0
  services:
    elnino-default:
      initContainer: 
        condition: "false" #Set to "true" for accord
        command: ["bash", "-c", "python3 elnino/database/migrations/populate_acord_v2.py"]
    customv2-predict:
      autoscaling:
        cooldownPeriod: 20
        minReplicas: '0'
      resources:
        requests: 
          cpu: 0
    customv2-train:
      autoscaling:
        cooldownPeriod: 20
      resources:
        requests: 
          cpu: 0
    cyclone-dataset:
      resources:
        requests: 
          cpu: 0
    cyclone-default:
      resources:
        requests: 
          cpu: 0
    cyclone-extract:
      resources:
        requests: 
          cpu: 0
    cyclone-featurize:
      resources:
        requests: 
          cpu: 0
    doc-splitting:
      resources:
        requests:
          cpu: 0 
    formextraction:
      enabled: false
      resources:
        requests:
          cpu: 0 
    glove-v1:
      enabled: false
      resources:
        requests:
          cpu: 0 
    imagefeatures-v2:
      autoscaling:
        cooldownPeriod: 20
      enabled: true
      resources:
        requests:
          cpu: 0 
    objectdetection-predict:
      enabled: false
      resources:
        requests:
          cpu: 0 
    objectdetection-train:
      enabled: false
      resources:
        requests:
          cpu: 0 
    pdfextraction-v2-predict:
      enabled: true
      resources:
        requests:
          cpu: 0 
    vdp:
      enabled: false
      resources:
        requests:
          cpu: 0 
cronjob:
  services:
    storage-cleanup:
      enabled: true
configs:
  allowed_origins: ALLOW_ALL
  postgres:
    app:
      user: indico
    metrics:
      user: indico
EOF 
  }


  file {
    content     = file("ipa-install.sh")
    destination = "/tmp/ipa-install.sh"
    permissions = "0700"
  }

  timeout = "60m"

  triggers = {
    "ts" : timestamp()
  }

  commands = [
    "/tmp/ipa-install.sh --ipa ${var.ipa_version} --ipa-crds ${var.ipa_crds_version} --ipa-pre-reqs ${var.ipa_pre_reqs_version} --keda ${var.keda_version} --monitoring ${var.monitoring_version} --smoketest ${var.ipa_smoketest_version} --snapshot-restore-version ${var.snapshot_restore_version}"
  ]
}

Eric-Fontana-Indico · Sep 27 '22

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#public_ip

public_ip - Public IP address assigned to the instance, if applicable. NOTE: If you are using an aws_eip with your instance, you should refer to the EIP's address directly and not use public_ip as this field will change after the EIP is attached.

Looks like the public_ip changed during the run, which is possible based on the scenario above. Not sure if that applies in your case, and/or whether you can put an aws_eip in between? Can you check?
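For illustration, a minimal sketch of the EIP approach the docs describe (the resource names aws_eip.sn and aws_eip_association.sn are assumptions, not taken from this thread):

resource "aws_eip" "sn" {
  domain = "vpc" # on AWS provider versions before 5.x this would be `vpc = true`
}

resource "aws_eip_association" "sn" {
  instance_id   = aws_instance.sn.id
  allocation_id = aws_eip.sn.id
}

Once allocated, aws_eip.sn.public_ip stays fixed for the lifetime of the allocation, which is what makes it safe to reference from other resources.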

loafoe · Sep 27 '22

I'm avoiding using an Elastic IP; we have too many restrictive quotas on them. And yes, the public_ip changes every run.

Eric-Fontana-Indico · Sep 27 '22

My aws_instance is like:


resource "aws_instance" "sn" {
  ami           = var.aws_ami
  instance_type = var.aws_instance_type
  key_name      = aws_key_pair.key121.key_name


  iam_instance_profile = aws_iam_instance_profile.sn_profile.name

  associate_public_ip_address = true

  vpc_security_group_ids = [aws_security_group.allow_tls.id]

  subnet_id  = aws_subnet.public.id
  private_ip = var.private_ip_address

  connection {
    type        = "ssh"
    user        = var.instance_username
    private_key = tls_private_key.oskey.private_key_pem
    host        = self.public_ip # within a resource's own connection block, use self instead of a self-reference
  }

  provisioner "file" {
    source      = "config.toml.tmpl"
    destination = "/tmp/config.toml.tmpl"
  }

  user_data = templatefile("setup-instance.sh", {
    k3s_version = var.k3s_version
  })

  #provisioner "file" {
  #  source      = "setup-instance.sh"
  #  destination = "/tmp/setup-instance.sh"
  #}

  #provisioner "remote-exec" {
  #  inline = [
  #    "chmod +x /tmp/setup-instance.sh",
  #    "/tmp/setup-instance.sh --k3s ${var.k3s_version}"
  #  ]
  #}

  tags = {
    Name = var.label
  }
}

Eric-Fontana-Indico · Sep 27 '22

So, the way the host field is defined and used within the provider/core currently expects it to be fixed/stable during a run. There is also a ForceNew attribute enabled on it, which might influence this as well. I'll need to check whether I can drop the ForceNew without breaking behaviour.

For now, my suggestion would still be to make the system reachable through an Elastic IP (see the sketch below) or to explore an alternative path for pushing the Kubernetes resources to your cluster.
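A minimal sketch of that wiring, assuming the hypothetical aws_eip.sn from the earlier sketch:

resource "ssh_resource" "install-command" {
  # ...
  # refer to the EIP's address directly instead of the instance's ephemeral
  # public_ip, so the host value no longer changes between plan and apply
  host = aws_eip.sn.public_ip
  # ...
}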

loafoe · Sep 27 '22

@Eric-Fontana-Indico Looks like you are using k3s.

You have a few different options for creating the resources in K3s:

  • Put your YAML in /var/lib/rancher/k3s/server/manifests/ on the host for auto-deployment (see the sketch after this list)
  • Use the Terraform Kubernetes Provider (this would require downloading the kubeconfig from the server first)
  • Adjust your setup-instance.sh to pull and apply your YAML
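
For the first option, a minimal sketch reusing the ssh_resource pattern from this issue to stage a manifest and move it into the k3s auto-deploy directory (the resource name, template file name, and passwordless sudo are assumptions):

resource "ssh_resource" "k3s-manifests" {
  host        = aws_instance.sn.public_ip
  user        = var.instance_username
  private_key = tls_private_key.oskey.private_key_pem

  file {
    # hypothetical template carrying the Secret/PV/PVC YAML shown above
    content = templatefile("harbor-pull-secret.yaml.tmpl", {
      harbor_pull_secret_b64 = var.harbor_pull_secret_b64
    })
    destination = "/tmp/harbor-pull-secret.yaml"
    permissions = "0600"
  }

  commands = [
    # k3s watches this directory and applies any manifest placed in it
    "sudo mv /tmp/harbor-pull-secret.yaml /var/lib/rancher/k3s/server/manifests/"
  ]
}

With this path no kubeconfig download is needed, since k3s itself applies whatever lands in that directory.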

tuxpeople · Oct 06 '22