
Create separate resources for `user_settings_yaml` and `kibana_settings_yaml`

Open Jacendb opened this issue 3 years ago • 3 comments

Overview

Create separate resources for elasticsearch and kibana settings.

In the Elastic console, an Elasticsearch cluster can only be configured after the deployment has been created and is running.

This would prevent Terraform cycle errors when setting up Okta SAML SSO integration. In my use case:

  • ec_deployment uses a template_file data source for user_settings_yaml in the elasticsearch block.
  • template_file refers to okta_app_saml for the IdP metadata and IdP entity ID.
  • okta_app_saml refers to the ec_deployment Kibana endpoint to create the SAML app :cyclone:

So Okta needs ec_deployment to exist before it can be created, but ec_deployment needs Okta to configure SSO, since user_settings_yaml is defined at creation time.

Possible Implementation

Create separate resources for Elasticsearch and Kibana settings, applied only after the Elasticsearch deployment is fully created and running.
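A hypothetical shape for such a resource could look like the sketch below. The resource name `ec_deployment_elasticsearch_settings` and its arguments are invented for illustration; nothing like this exists in the provider today.

```hcl
# Hypothetical: settings applied as a separate resource after the
# deployment already exists, breaking the create-time dependency cycle.
resource "ec_deployment_elasticsearch_settings" "saml" {
  deployment_id      = ec_deployment.elasticsearch.id
  user_settings_yaml = data.template_file.es-saml.rendered
}
```

Because the settings would only be applied post-creation, okta_app_saml could reference the Kibana endpoint without creating a cycle.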

Testing

This is the Terraform code I'm using:

# Elasticsearch
resource "ec_deployment" "elasticsearch" {
  name = "${var.foundation.name}-elasticsearch"

  region                 = data.ec_stack.elasticsearch.region
  version                = data.ec_stack.elasticsearch.version
  deployment_template_id = "gcp-storage-optimized"

  elasticsearch {
    config {
      user_settings_yaml = data.template_file.es-saml.rendered
    }
  }

  kibana {}
}
# template template
data "template_file" "es-saml" {
  template = file("${path.module}/templates/ec-saml.yml")
  vars = {
    IDP_METADATA  = okta_app_saml.okta_es_app.metadata_url
    IDP_ENTITY_ID = okta_app_saml.okta_es_app.entity_url
    SP_ENTITY_ID  = "${ec_deployment.elasticsearch.kibana[0].https_endpoint}/"
    SP_ACS        = "${ec_deployment.elasticsearch.kibana[0].https_endpoint}/api/security/v1/saml"
    SP_LOGOUT     = "${ec_deployment.elasticsearch.kibana[0].https_endpoint}/logout"
  }
}
# ES SAML app
resource "okta_app_saml" "okta_es_app" {
  label             = "${ec_deployment.elasticsearch.name}-kibana"
  sso_url           = "${ec_deployment.elasticsearch.kibana[0].https_endpoint}/api/security/v1/saml"
  recipient         = "${ec_deployment.elasticsearch.kibana[0].https_endpoint}/api/security/v1/saml"
  destination       = "${ec_deployment.elasticsearch.kibana[0].https_endpoint}/api/security/v1/saml"
  audience          = "${ec_deployment.elasticsearch.kibana[0].https_endpoint}/"
  assertion_signed  = true
  response_signed   = true
  honor_force_authn = true

  hide_ios                 = true
  hide_web                 = false
  saml_version             = "2.0"
  subject_name_id_template = "$${user.userName}"
  subject_name_id_format   = "urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"
  signature_algorithm      = "RSA_SHA256"
  digest_algorithm         = "SHA256"
  authn_context_class_ref  = "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport"
}
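As an aside, on Terraform v1.x the template_file data source is superseded by the built-in templatefile() function, which renders the same file without an extra provider. A sketch, assuming the same template and variables (note this does not remove the dependency cycle either):

```hcl
# Equivalent rendering with the built-in templatefile() function
# instead of the template_file data source.
locals {
  es_saml = templatefile("${path.module}/templates/ec-saml.yml", {
    IDP_METADATA  = okta_app_saml.okta_es_app.metadata_url
    IDP_ENTITY_ID = okta_app_saml.okta_es_app.entity_url
    SP_ENTITY_ID  = "${ec_deployment.elasticsearch.kibana[0].https_endpoint}/"
    SP_ACS        = "${ec_deployment.elasticsearch.kibana[0].https_endpoint}/api/security/v1/saml"
    SP_LOGOUT     = "${ec_deployment.elasticsearch.kibana[0].https_endpoint}/logout"
  })
}
```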

Context

I currently cannot fully automate an Elastic deployment with Okta SAML SSO. A workaround might get this done by applying Terraform twice, which is very inconvenient when automating several big deployments.

I tried creating a local_file resource, but obviously that does not remove the dependency cycle.

On AWS OpenSearch, SAML SSO configuration is done in a separate resource.
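For comparison, the AWS provider's aws_opensearch_domain_saml_options resource attaches SAML configuration to an already-created domain, which avoids exactly this kind of cycle. A sketch with placeholder values (the domain and metadata file are assumptions):

```hcl
# AWS OpenSearch: SAML options live in their own resource,
# applied after the domain itself exists.
resource "aws_opensearch_domain_saml_options" "example" {
  domain_name = aws_opensearch_domain.example.domain_name

  saml_options {
    enabled = true
    idp {
      entity_id        = okta_app_saml.okta_es_app.entity_url
      metadata_content = file("${path.module}/idp-metadata.xml")
    }
  }
}
```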

Your Environment

  • Terraform v1.0.0
  • Elastic provider v0.4.1
  • Okta
  • Run under automation

Jacendb avatar Jul 12 '22 14:07 Jacendb

You should be able to use https://registry.terraform.io/providers/elastic/elasticstack/latest/docs/resources/elasticsearch_cluster_settings to provision the cluster after it is up and running, and add the configuration parameters that you want.

muresan avatar Jul 21 '22 17:07 muresan

I was under the impression elasticstack couldn't be used for this with Elastic Cloud (the ec provider).

Jacendb avatar Jul 21 '22 17:07 Jacendb

Something like this should work. I did not test it myself, but it should be as simple as:

provider "elasticstack" {
  elasticsearch {
    username  = ec_deployment.test.elasticsearch_username
    password  = ec_deployment.test.elasticsearch_password
    endpoints = [ec_deployment.test.elasticsearch[0].https_endpoint]
  }
}

because elasticstack is generic; it should work unless Elastic Cloud deployments have the specific cluster management APIs locked down (which I also did not test, but that should not be the case).
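For reference, a minimal use of that resource might look like the sketch below (shape per the elasticstack provider docs; the setting name is just an arbitrary dynamic cluster setting). One caveat: SAML realm settings are node-level settings that belong in user_settings_yaml (elasticsearch.yml), not dynamic cluster settings, so this resource may not cover the SAML use case specifically.

```hcl
# Dynamic cluster settings applied after the deployment is up,
# via the generic elasticstack provider.
resource "elasticstack_elasticsearch_cluster_settings" "my_cluster_settings" {
  persistent {
    setting {
      name  = "indices.recovery.max_bytes_per_sec"
      value = "42mb"
    }
  }
}
```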

muresan avatar Jul 21 '22 18:07 muresan

https://github.com/elastic/terraform-provider-ec/issues/433 is intended to fix this same issue. Due to the nature of the Cloud API it's not feasible to add these as separate resources at the moment.

tobio avatar Apr 27 '23 23:04 tobio