[Bug]: Unstable sorting for triggers in pipelines?
What happened?
When applying the same pipeline definition (or one with an unrelated change), the order of the "triggers" within the pipeline swaps seemingly at random, implying some kind of unstable sort. For example, we have two triggers, "nonprod" and "infradev", and the tf-plan shows them being swapped:
~ spec {
    # (10 unchanged attributes hidden)
    ~ trigger {
        ~ modified_files_glob = "nonprod/accounts/**" -> "infradev/accounts/**"
        ~ name = "nonprod_config_change" -> "infradev_config_change"
        ~ variables = {
            ~ "ENV_NAME" = "nonprod" -> "infradev"
          }
        # (15 unchanged attributes hidden)
        ~ runtime_environment {
            ~ name = "inClusterContext/cf-rt-mainnp-jft00u4fo7t4o205" -> "inClusterContext/cf-rt-nonprod-fnuwz2oh1ubv4mjd"
            # (4 unchanged attributes hidden)
          }
      }
    ~ trigger {
        ~ modified_files_glob = "infradev/accounts/**" -> "nonprod/accounts/**"
        ~ name = "infradev_config_change" -> "nonprod_config_change"
        ~ variables = {
            ~ "ENV_NAME" = "infradev" -> "nonprod"
          }
        # (15 unchanged attributes hidden)
        ~ runtime_environment {
            ~ name = "inClusterContext/cf-rt-nonprod-fnuwz2oh1ubv4mjd" -> "inClusterContext/cf-rt-mainnp-jft00u4fo7t4o205"
            # (4 unchanged attributes hidden)
          }
      }
    # (2 unchanged blocks hidden)
  }
Version
codefresh-io/codefresh v0.11.0
Relevant Terraform Configuration
resource "codefresh_pipeline" "config" {
name = "${codefresh_project.cloudsnooze.name}/config"
tags = ["config"]
is_public = true
spec {
concurrency = 1
runtime_environment {
name = local.runtime_nonprod
memory = "4000Mi"
}
contexts = [local.secrets_context]
# The spec_template block refers to the pipeline workflow YAML used
spec_template {
repo = "${local.ghe_org}/${local.ghe_repo_config}"
path = "./codefresh.yml"
revision = "develop" # branch
context = local.ghe_codefresh_integration_context
}
# Trigger the pipeline when commits to /$envname are made.
# The pipeline itseld will only notify+deploy when branch=develop
# We create one trigger per environment and set variables based on this.
trigger {
type = "git"
provider = "github"
context = local.ghe_codefresh_integration_context
repo = "${local.ghe_org}/${local.ghe_repo_config}"
# branch_regex = "/^((develop)$).*/gi"
# branch_regex_input = "multiselect"
events = [
"push.heads"
]
modified_files_glob = "infradev/accounts/**"
name = "infradev_config_change"
disabled = false
variables = {
ENV_NAME = "infradev"
}
runtime_environment {
name = local.runtime_infradev
memory = "2000Mi"
}
}
trigger {
type = "git"
provider = "github"
context = local.ghe_codefresh_integration_context
repo = "${local.ghe_org}/${local.ghe_repo_config}"
# branch_regex = "/^((develop)$).*/gi"
# branch_regex_input = "multiselect"
events = [
"push.heads"
]
modified_files_glob = "nonprod/accounts/**"
name = "nonprod_config_change"
disabled = false
variables = {
ENV_NAME = "nonprod"
}
runtime_environment {
name = local.runtime_nonprod
memory = "2000Mi"
}
}
# Pipeline variables to include that can be referenced in the pipeline workflow YAML
variables = {
NOTIFICATION_EMAIL = "[email protected]"
}
}
}
The schema for Pipeline uses schema.TypeList for the triggers (which should be order-preserving), and all triggers in the tfstate file appear to match the order in the tf file... digging.
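For reference, a quick way to dump the trigger order recorded in state (assuming jq is available and the resource lives in the root module under the address codefresh_pipeline.config) is:
terraform show -json | jq -r '.values.root_module.resources[] | select(.address == "codefresh_pipeline.config") | .values.spec[0].trigger[].name'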
From what I can tell, everything is preserving the order of items. But if I store the plan in a file, the triggers are definitely in the wrong/new order (even though .spec[0].trigger is a list).
terraform plan -refresh -out=tfplan
terraform show -json tfplan
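To make the reordering visible, a filter along these lines (again assuming jq and the codefresh_pipeline.config address) prints the trigger names from the plan JSON in both their before and after order:
terraform show -json tfplan | jq '.resource_changes[] | select(.address == "codefresh_pipeline.config") | {before: [.change.before.spec[0].trigger[].name], after: [.change.after.spec[0].trigger[].name]}'
If those two arrays come out swapped even though the config hasn't changed, that would suggest the swap is introduced by the order of triggers returned by the API (or by how the provider flattens them), not by the config itself.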
OK, I give up - will defer to someone who actually knows this stuff.
@lyricnz thanks for opening the issue. The only way I was able to reproduce it is by changing the order of triggers. When I change the trigger order after they have already been applied, a diff shows up. Applying again finishes successfully but doesn't actually change anything on the API side, and the next plan shows the same diff until I change the order back. So for now, you can avoid it by changing the trigger order in your tf file. I will investigate why reordering the triggers works in Terraform but doesn't do anything on the API side.