
[Bug]: Unclear how to remove a specific LKE node pool when both have identical configuration

Open · ilyasotkov opened this issue 3 years ago · 5 comments

Terraform Version

Terraform v1.1.5 on linux_amd64

Linode Provider Version

v1.25.2

Affected Terraform Resources

linode_lke_cluster

Terraform Config Files

resource "linode_lke_cluster" "lke_cluster" {
  k8s_version = "1.22"
  region      = "eu-central"
  label       = "my-cluster-${var.env}"
  pool {
    type  = "g6-standard-1"
    count = 1
  }
  pool {
    type  = "g6-standard-1"
    count = 1
  }
}

Debug Output

No response

Panic Output

No response

Expected Behavior

I'm able to remove a specific node pool without blind guesswork or going to the UI/CLI and then trying to sync my Terraform state.

Actual Behavior

I need to remove a specific node pool (either the first or the second). However, since node pools have no required label and are not a separate resource type, removing either of the

  pool {
    type  = "g6-standard-1"
    count = 1
  }

blocks leaves a configuration that is 100% identical either way. It's then up to Terraform / the Linode provider to decide which of the two pools gets removed, and I couldn't figure out how to control this behavior.
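Concretely, after deleting either block the configuration collapses to a single pool block that both remote pools match equally well, so nothing in the config identifies which pool should be destroyed:

resource "linode_lke_cluster" "lke_cluster" {
  k8s_version = "1.22"
  region      = "eu-central"
  label       = "my-cluster-${var.env}"

  # The second, identical pool block was removed; either remote pool
  # could correspond to the one that remains.
  pool {
    type  = "g6-standard-1"
    count = 1
  }
}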

Steps to Reproduce

No response

ilyasotkov · Feb 20 '22 19:02

Just noticed PR #569; it seems like it would resolve the issue I've described here. Hopefully it gets merged soon :)

ilyasotkov · Feb 20 '22 20:02

Hello, thanks for the feedback!

As you mentioned, PR #569 should allow you to uniquely identify node pools using tags. We still plan to revisit that PR, but I don't have an ETA quite yet.
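As a hedged sketch only: if the pool-level tags attribute proposed in PR #569 were available, the two otherwise-identical pools could be disambiguated like this (tags inside a pool block is the proposed addition and does not exist in v1.25.2):

resource "linode_lke_cluster" "lke_cluster" {
  k8s_version = "1.22"
  region      = "eu-central"
  label       = "my-cluster-${var.env}"

  pool {
    type  = "g6-standard-1"
    count = 1
    tags  = ["pool-a"] # hypothetical attribute from PR #569
  }

  pool {
    type  = "g6-standard-1"
    count = 1
    tags  = ["pool-b"] # removing this block would now unambiguously target one pool
  }
}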

We have also had some discussions about possibly creating a linode_lke_node_pool resource although I can't guarantee this functionality can be added due to some architectural constraints.

I'll let you know when we have any updates on this issue :+1:

LBGarber · Feb 21 '22 14:02

> We have also had some discussions about possibly creating a linode_lke_node_pool resource although I can't guarantee this functionality can be added due to some architectural constraints.

This would be great. To minimise the overhead of cluster-wide services, we currently deploy a node pool per instance instead of a cluster per instance. At the moment we have to maintain and coordinate these changes between various modules; it would greatly simplify the workflow if we could add node pools without having to specify a full LKE cluster.

bbetter173 · Sep 07 '22 01:09

I'm facing the same issue. I need to get the nodes of a pool, and currently there's no way to filter for them, since during pool creation I can't specify labels, tags, etc. for the pool's nodes. I'd expect something like this:

resource "linode_lke_cluster" "pychat" {
  k8s_version = var.k8s_version
  label       = var.linode_app_label
  region      = var.region

  pool {
    tags  = ["pychat_linode"]
    count = var.node_count
    type  = var.node_type
  }
}

data "linode_instances" "pychat_linode" {
  filter {
    name   = "tags"
    values = ["pychat_linode"]
  }
}

Support for this was added in that PR, but it's closed. Any updates on this one? I can work on a PR if needed.

akoidan · May 28 '23 11:05

> We have also had some discussions about possibly creating a linode_lke_node_pool resource although I can't guarantee this functionality can be added due to some architectural constraints.

I would like to strongly add my support for this feature. 😄 Quite a few workflows I'd like to use are limited by the inability to control node pools separately from the cluster.

lseelenbinder · Jul 13 '23 12:07

This issue seems to have been resolved by the introduction of the linode_lke_node_pool resource. Feel free to @ me if you feel this issue is still relevant.
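For anyone landing here later, a minimal sketch of the standalone resource (attribute names as documented in recent provider versions; verify the exact schema against the Terraform registry docs):

resource "linode_lke_cluster" "lke_cluster" {
  k8s_version = "1.22"
  region      = "eu-central"
  label       = "my-cluster-${var.env}"

  # The cluster still requires at least one inline pool.
  pool {
    type  = "g6-standard-1"
    count = 1
  }
}

resource "linode_lke_node_pool" "extra" {
  cluster_id = linode_lke_cluster.lke_cluster.id
  type       = "g6-standard-1"
  node_count = 1
  tags       = ["pool-b"] # if supported in your provider version

  # Removing this resource now destroys exactly this pool.
}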

lgarber-akamai · Apr 11 '24 15:04