[2] Inconsistent conditional result types error
What should be investigated.
While upgrading one cluster to Kubespray 2.21, we ran into a problem with inconsistent conditional result types. Below is the error:
module.compute.openstack_networking_floatingip_associate_v2.k8s_master[1]: Refreshing state... [id=baf25378-8047-4273-9334-18f4eaf8d71c]
module.compute.openstack_compute_instance_v2.k8s_master[1]: Refreshing state... [id=e4e3588e-1915-43c6-bfe7-4e0a0e3d756e]
module.compute.openstack_compute_instance_v2.k8s_master[0]: Refreshing state... [id=a6c1133d-6a08-4507-9c14-62dc9f88d4aa]
module.compute.openstack_compute_instance_v2.k8s_master[2]: Refreshing state... [id=4bf27f0b-f56b-4ab3-bfcc-1d3f68d122dd]
╷
│ Warning: Experimental feature "module_variable_optional_attrs" is active
│
│ on versions.tf line 13, in terraform:
│ 13: experiments = [module_variable_optional_attrs]
│
│ Experimental features are subject to breaking changes in future minor or patch releases, based on feedback.
│
│ If you have feedback on the design of this feature, please open a GitHub issue to discuss it.
│
│ (and one more similar warning elsewhere)
╵
╷
│ Error: Inconsistent conditional result types
│
│ on modules/ips/main.tf line 43, in resource "openstack_networking_floatingip_v2" "k8s_nodes":
│ 43: for_each = var.number_of_k8s_nodes == 0 ? { for key, value in var.k8s_nodes : key => value if value.floating_ip } : {}
│ ├────────────────
│ │ var.k8s_nodes is object with 21 attributes
│ │ var.number_of_k8s_nodes is 0
│
│ The true and false result expressions must have consistent types. The 'true' value includes object attribute "elastisys-0", which is absent in the 'false' value.
╵
Terraform found changes for wc-cluster, review the changes.
Continuing here will not apply anything, it will just create a temporary state file.
Continue? [y/N]
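As far as we can tell, the conditional fails because var.k8s_nodes has no explicit type declared, so Terraform infers a concrete object type from its value (the "object with 21 attributes" in the error). The { for ... } expression in the true branch then yields an object with attributes such as "elastisys-0", while the false branch is an empty {}, and Terraform cannot unify the two branch types. One way to sidestep the conditional entirely (a sketch only, not what was done upstream) would be to fold the condition into the for expression's filter:

for_each = {
  # A single for expression has no true/false branches to reconcile,
  # so the type-consistency check on conditionals never applies.
  for key, value in var.k8s_nodes : key => value
  if var.number_of_k8s_nodes == 0 && value.floating_ip
}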
To fix the problem we had to add the following variable declaration (see https://github.com/elastisys/kubespray/compare/5b3d5b26fbe0f0aad06d16ee6a4488476e2d29da...80e63ed55da04184f1fcb456f875732d9f43fe56#diff-e13952be66e4a9b780c6d5bf82e76f413b148dea98fda505ed19ce21ffbf16c8):
variable "k8s_nodes" {
default = {}
type = map(object({
az = string
flavor = string
floating_ip = bool
extra_groups = optional(string)
image_id = optional(string)
root_volume_size_in_gb = optional(number)
volume_type = optional(string)
network_id = optional(string)
server_group = optional(string)
cloudinit = optional(object({
extra_partitions = list(object({
volume_path = string
partition_path = string
partition_start = string
partition_end = string
mount_path = string
}))
}))
}))
}
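Note that the optional(...) attributes in this declaration are what the "module_variable_optional_attrs" warning above refers to: on Terraform versions before 1.3 they only work with the experiment enabled in versions.tf, which (per the warning) is already the case here, roughly like this:

terraform {
  # Opt in to optional object attributes on Terraform < 1.3.
  # On Terraform 1.3+ optional attributes are stable and this experiment
  # line will need to be removed, so keep that in mind when bumping Terraform.
  experiments = [module_variable_optional_attrs]
}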
Investigate whether this fix needs to be included in our main branch in general.
What artifacts should this produce.
Timebox the investigation to 1 day. If you can replicate the issue but cannot find the root cause, just implement the suggested fix.