terraform-aws-eks
try setting KUBERNETES_MASTER environment variable when trying to update from 17 to 18
Description
I'm trying to upgrade my eks module from v17.24 to v18.
There are a lot of breaking changes, and I'm losing the connection to the cluster when trying to apply the update.
I have tried generating a kubeconfig locally, but I still have issues reaching the cluster.
I read the migration guides without managing to fix most of my issues, so the first thing I want to do is to be able to reach the cluster and perform a terraform plan; my current attempt at the provider configuration is sketched below.
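For reference, this is roughly how I pointed the kubernetes and helm providers at the kubeconfig I generated locally (just a sketch of my attempt; the path is from my machine):

# Sketch of my current attempt: both providers read the locally generated kubeconfig.
provider "kubernetes" {
  config_path = "~/.kube/config"
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}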
- [ ] ✋ I have searched the open/closed issues and my issue is not listed.
⚠️ Note
Before you submit an issue, please perform the following first:
- Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
- Re-initialize the project root to pull down modules: terraform init
- Re-attempt your terraform plan or apply and check if the issue still persists
Versions
- Module version [Required]: 17.24 (upgrading to 18.6.1)
- Terraform version:
  Terraform v1.3.6
  on linux_amd64
- Provider version(s):
  + provider registry.terraform.io/hashicorp/aws v5.31.0
  + provider registry.terraform.io/hashicorp/cloudinit v2.3.3
  + provider registry.terraform.io/hashicorp/helm v2.12.1
  + provider registry.terraform.io/hashicorp/kubernetes v2.24.0
  + provider registry.terraform.io/hashicorp/local v2.1.0
  + provider registry.terraform.io/hashicorp/null v3.2.2
  + provider registry.terraform.io/hashicorp/random v3.1.0
  + provider registry.terraform.io/hashicorp/template v2.2.0
  + provider registry.terraform.io/hashicorp/tls v4.0.5
  + provider registry.terraform.io/terraform-aws-modules/http v2.4.1
Reproduction Code [Required]
Code I'm trying to apply:
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "18.6.1"
cluster_name = local.cluster_name
cluster_version = var.kubernetes_version
subnet_ids = [module.vpc.private_subnets[0], module.vpc.private_subnets[1]]
vpc_id = module.vpc.vpc_id
enable_irsa = "true"
# workers_additional_policies = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore", "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"]
# cluster_endpoint_private_access_cidrs = [var.vpc_cidr]
# cluster_create_endpoint_private_access_sg_rule = true
cloudwatch_log_group_retention_in_days = 0
cluster_endpoint_private_access = true
cluster_endpoint_public_access = false
cluster_enabled_log_types = local.enabled_cluster_logs
cluster_encryption_config = [
{
provider_key_arn = aws_kms_key.eks.arn
resources = ["secrets"]
}
]
node_security_group_additional_rules = {
ingress_self_all = {
description = "Node to node all ports/protocols"
protocol = "-1"
from_port = 0
to_port = 0
type = "ingress"
self = true
}
egress_all = {
description = "Node all egress"
protocol = "-1"
from_port = 0
to_port = 0
type = "egress"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
cluster_nodes_incoming = {
description = "allow from cluster To node 1025-65535"
protocol = "tcp"
from_port = 1025
to_port = 65535
type = "ingress"
source_cluster_security_group = true
}
}
tags = {
Environment = var.environment
Terraform = "True"
project = "${var.project}-eks-${var.environment}"
}
self_managed_node_group_defaults = {
iam_role_additional_policies = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore", "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"]
tags = {
"k8s.io/cluster-autoscaler/enabled" = "True"
"k8s.io/cluster-autoscaler/${local.cluster_name}" = "True"
"Name" = local.cluster_name
}
}
self_managed_node_groups = {
one = {
name = "spot-1"
ami_id = data.aws_ami.bottlerocket_ami.id
max_size = var.eks_max_nodes
min_size = var.eks_min_nodes
instance_type = var.eks_instance_type
desired_size = var.eks_min_nodes
use_mixed_instances_policy = true
mixed_instances_policy = {
instances_distribution = {
on_demand_base_capacity = 0
on_demand_percentage_above_base_capacity = 10
spot_allocation_strategy = "capacity-optimized"
}
}
tags = {
"k8s.io/cluster-autoscaler/enabled" = "True"
"k8s.io/cluster-autoscaler/${local.cluster_name}" = "True"
}
}
}
}
Steps to reproduce the behavior:
Bump the eks module from 17.24.0 to 18.6.1 as shown above, run terraform init, then terraform plan.
Expected behavior
terraform plan completes so I can review and fix the remaining diffs.
Actual behavior
I know that the aws-auth handling and the kubeconfig output changed between v17 and v18.
│ Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
│
│   with module.benga-env.helm_release.nginx-ingress-internal,
│   on ../../modules/env/ingress-internal.tf line 5, in resource "helm_release" "nginx-ingress-internal":
│    5: resource "helm_release" "nginx-ingress-internal" {
│
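I suspect the providers now need to be fed directly from the module outputs, since v18 no longer renders a kubeconfig for me. Something like the sketch below is what I'm planning to try (my assumption, not taken verbatim from the migration guide):

# Assumption on my side: wire the providers to the module outputs instead of a
# kubeconfig file, using a token from the aws_eks_cluster_auth data source.
data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.this.token
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}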
Terminal Output Screenshot(s)
Additional context
My old code:
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "17.24.0"
cluster_name = local.cluster_name
cluster_version = var.kubernetes_version
subnets = [module.vpc.private_subnets[0], module.vpc.private_subnets[1]]
vpc_id = module.vpc.vpc_id
enable_irsa = "true"
workers_additional_policies = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore", "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"]
cluster_endpoint_private_access_cidrs = [var.vpc_cidr]
cluster_create_endpoint_private_access_sg_rule = true
cluster_log_retention_in_days = 0
cluster_endpoint_private_access = true
cluster_endpoint_public_access = false
cluster_enabled_log_types = local.enabled_cluster_logs
cluster_encryption_config = [
{
provider_key_arn = aws_kms_key.eks.arn
resources = ["secrets"]
}
]
tags = {
Environment = var.environment
Terraform = "True"
project = "${var.project}-eks-${var.environment}"
}
workers_group_defaults = {
metadata_http_tokens = "required"
root_volume_type = "gp3"
root_volume_size = 100
subnets = [module.vpc.private_subnets[0], module.vpc.private_subnets[1]]
#additional_userdata = "curl -o - https://inspector-agent.amazonaws.com/linux/latest/install | bash - ; yum makecache ; yum -y update"
tags = [
{
"key" = "k8s.io/cluster-autoscaler/enabled"
"value" = "True"
"propagate_at_launch" = "true"
},
{
"key" = "k8s.io/cluster-autoscaler/${local.cluster_name}"
"value" = "True"
"propagate_at_launch" = "true"
}
]
}
worker_groups = [
{
ami_id = data.aws_ami.bottlerocket_ami.id
instance_type = var.eks_instance_type
asg_desired_capacity = var.eks_min_nodes
asg_min_size = var.eks_min_nodes
asg_max_size = var.eks_max_nodes
userdata_template_extra_args = {
enable_admin_container = false
enable_control_container = true
aws_region = data.aws_region.current.name
}
userdata_template_file = "${path.module}/userdata.toml"
}
]
map_users = var.map_users
map_roles = var.map_roles
}
I know that the code I'm trying to apply is not exactly the same configuration, but I want to be able to run the terraform plan command and fix the diffs.
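If it helps to know where I'm heading: my assumption is that the old map_users / map_roles arguments have to be replaced by managing the aws-auth ConfigMap myself once the cluster is reachable again, roughly like this untested sketch:

# Untested assumption: patch the existing aws-auth ConfigMap with the same
# var.map_roles / var.map_users I passed to the v17 module (node role entries
# for the self-managed groups would still need to be added as well).
resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode(var.map_roles)
    mapUsers = yamlencode(var.map_users)
  }

  force = true
}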