terraform-provider-rundeck
invalid memory address or nil pointer dereference
Hi,
I have the following issue when running Terraform. It happens if you update some resources, such as keys or the project, through the GUI after creating them in Terraform. Once you do that, the error below occurs. The expected behaviour would be for the plan to report that there have been changes, or that the resource no longer exists and needs to be recreated.
╷
│ Error: Plugin did not respond
│
│   with module.rundeck_lab["testing"].rundeck_project.lab[0],
│   on ../../modules/rundeck_lab/rundeck_unix.tf line 1, in resource "rundeck_project" "lab":
│    1: resource "rundeck_project" "lab" {
│
│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).UpgradeResourceState call. The plugin logs may
│ contain more details.
╵
╷
│ Error: Plugin did not respond
│
│   with module.rundeck_lab["testing"].rundeck_public_key.lab_unix[0],
│   on ../../modules/rundeck_lab/rundeck_unix.tf line 266, in resource "rundeck_public_key" "lab_unix":
│   266: resource "rundeck_public_key" "lab_unix" {
│
│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain
│ more details.
╵
╷
│ Error: Plugin did not respond
│
│   with module.rundeck_lab["testing"].rundeck_acl_policy.lab[0],
│   on ../../modules/rundeck_lab/rundeck_unix.tf line 278, in resource "rundeck_acl_policy" "lab":
│   278: resource "rundeck_acl_policy" "lab" {
│
│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain
│ more details.
╵
╷
│ Error: Plugin did not respond
│
│   with module.rundeck_lab["testing"].rundeck_private_key.windows_lab[0],
│   on ../../modules/rundeck_lab/rundeck_windows.tf line 127, in resource "rundeck_private_key" "windows_lab":
│   127: resource "rundeck_private_key" "windows_lab" {
│
│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain
│ more details.
╵
╷
│ Error: Plugin did not respond
│
│   with module.rundeck_lab["testing"].rundeck_acl_policy.windows_lab[0],
│   on ../../modules/rundeck_lab/rundeck_windows.tf line 133, in resource "rundeck_acl_policy" "windows_lab":
│   133: resource "rundeck_acl_policy" "windows_lab" {
│
│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain
│ more details.
╵
Stack trace from the terraform-provider-rundeck_v0.4.6 plugin:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xc257f2]
goroutine 73 [running]:
github.com/terraform-providers/terraform-provider-rundeck/rundeck.PrivateKeyExists(0xc00052fc20?, {0xde1280?, 0xc000435440})
    github.com/terraform-providers/terraform-provider-rundeck/rundeck/resource_private_key.go:122 +0x1f2
github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Resource).RefreshWithoutUpgrade(0xc00053a600, 0xc00021c190, {0xde1280, 0xc000435440})
    github.com/hashicorp/[email protected]/helper/schema/resource.go:440 +0x131
github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin.(*GRPCProviderServer).ReadResource(0xc000012a98, {0xc00021c0a0?, 0x4b9f06?}, 0xc00021c0a0)
    github.com/hashicorp/[email protected]/internal/helper/plugin/grpc_provider.go:525 +0x365
github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5._Provider_ReadResource_Handler({0xdb8480?, 0xc000012a98}, {0xf73790, 0xc000270180}, 0xc0005ae600, 0x0)
    github.com/hashicorp/[email protected]/internal/tfplugin5/tfplugin5.pb.go:3153 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00042c480, {0xf77880, 0xc000172000}, 0xc00001e000, 0xc000203650, 0x14c1e90, 0x0)
    google.golang.org/[email protected]/server.go:995 +0xe1e
google.golang.org/grpc.(*Server).handleStream(0xc00042c480, {0xf77880, 0xc000172000}, 0xc00001e000, 0x0)
    google.golang.org/[email protected]/server.go:1275 +0xa16
google.golang.org/grpc.(*Server).serveStreams.func1.1()
    google.golang.org/[email protected]/server.go:710 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
    google.golang.org/[email protected]/server.go:708 +0xea
Error: The terraform-provider-rundeck_v0.4.6 plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely helpful if you could report the crash with the plugin's maintainers so that it can be fixed. The output above should help diagnose the issue.
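Looking at the top frame, the panic happens inside PrivateKeyExists at resource_private_key.go:122, which suggests a response pointer is dereferenced without a nil check after the key was deleted out-of-band. Below is a minimal sketch of the kind of guard that would avoid the panic; keyMeta and lookupKeyMeta are hypothetical stand-ins, not the provider's actual code:

package rundeck

type keyMeta struct {
	Path string
}

// lookupKeyMeta is a hypothetical stand-in for the Rundeck key-storage
// API call; it returns (nil, nil) when the key has been deleted
// out-of-band through the GUI.
func lookupKeyMeta(path string) (*keyMeta, error) {
	return nil, nil
}

func privateKeyExists(path string) (bool, error) {
	meta, err := lookupKeyMeta(path)
	if err != nil {
		return false, err
	}
	// Guard that the panic suggests is missing: a deleted key can come
	// back as a nil payload rather than an error, and dereferencing it
	// unchecked is what triggers the SIGSEGV.
	if meta == nil {
		return false, nil // report "gone" so Terraform plans a recreate
	}
	return meta.Path == path, nil
}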
Hi @DragosV29, can you please provide more detailed steps to reproduce this?
Hello,
Create a Rundeck project through Terraform code:
resource "rundeck_project" "lab" {
count = var.disable_deletion == 1 ? 1 : 0
default_node_executor_plugin = "sshj-ssh"
default_node_file_copier_plugin = "sshj-scp"
description = "Lab environment for ${var.subproject} unix"
extra_config = {
#"project.label" = local.rundeck_project_name
"project/label" = local.rundeck_unix_project_name
"project/ansible-generate-inventory" = "true"
#"project/ansible-ssh-passphrase-option" = "option.password"
"project/disable/executions" = "false"
"project/disable/schedule" = "false"
"project/execution/history/cleanup/batch" = "500"
"project/execution/history/cleanup/enabled" = "false"
"project/execution/history/cleanup/retention/days" = "60"
"project/execution/history/cleanup/retention/minimum" = "50"
"project/execution/history/cleanup/schedule" = "0 0 0 1/1 * ? *"
"project/file-copy-destination-dir" = "/home/ec2-user"
"project/healthcheck/cache/refresh" = "true"
"project/healthcheck/enabled" = "true"
"project/healthcheck/onstartup" = "true"
"project/jobs/gui/groupExpandLevel" = "1"
"project/label" = ""
"project/later/executions/disable" = "false"
"project/later/executions/enable" = "false"
"project/later/schedule/disable" = "false"
"project/later/schedule/enable" = "false"
#"project/nodeCache/delay" = ""
"project/nodeCache/enabled" = "true"
"project/nodeCache/firstLoadSynch" = "true"
"project/output/allowUnsanitized" = "false"
"project/retry-counter" = "3"
"project/ssh-command-timeout" = "0"
"project/ssh-connect-timeout" = "0"
"provisioningDetails" = "Provisioned through Terraform rundeck-${var.env} project"
name = local.rundeck_unix_project_name
ssh_key_storage_path = var.no_of_unix_instances != 0 ? "keys/${rundeck_private_key.lab_unix[0].path}" : null
resource_model_source {
config = {
"endpoint" = "https://${data.aws_region.current[count.index].endpoint}"
"filter" = join(";", [for k,v in local.lab_unix_tags: "tag:${k}=${v}"])
"httpProxyPort" = "80"
"pageResults" = "100"
"refreshInterval" = "30"
"region" = data.aws_region.current[count.index].name
"runningOnly" = "true"
"synchronousLoad" = "true"
"useDefaultMapping" = "true"
}
type = "aws-ec2"
}
}
I've also created the machine as part of it, but that is not important.
Create the Rundeck keys through Terraform:
resource "rundeck_public_key" "lab_unix" { count = var.no_of_unix_instances != 0 && var.disable_deletion == 1 ? 1 : 0 path = "project/${local.rundeck_unix_project_name}/${var.subproject}_key.pub" key_material = tls_private_key.lab_key[count.index].public_key_openssh }
resource "rundeck_private_key" "lab_unix" { count = var.no_of_unix_instances != 0 && var.disable_deletion == 1 ? 1 : 0 path = "project/${local.rundeck_unix_project_name}/${var.subproject}_key" key_material = tls_private_key.lab_key[count.index].private_key_openssh }
The keys take as a reference a tls_private_key resource:
resource "tls_private_key" "lab_key" { count = var.no_of_unix_instances != 0 && var.disable_deletion == 1 ? 1 : 0 algorithm = "RSA" rsa_bits = 4096 }
Additionally, you can have a basic job, but I do not think that makes any difference:
resource "rundeck_job" "basic_lab" { count = var.no_of_unix_instances != 0 && var.disable_deletion == 1 ? 1 : 0 name = "Run basic command on server(s)" project_name = rundeck_project.lab[count.index].name node_filter_query = "tags:running" description = "Unix AdHoc Commands Job"
schedule = "0 00 10 ? * 1-5 *" time_zone = "Europe/Bucharest"
command { description = "Execute AdHoc Commands" shell_command = "echo "Hello from Rundeck!"; whoami;echo "You are connected to:"; hostname" }
}
Once all of these have been provisioned, go to the Rundeck GUI, remove the key manually, then run another plan/apply.
Let me know if you are able to reproduce the issue or if any additional code is needed.
I think the expectation is that changes aren't made outside of Terraform. Deleting the key manually creates a situation where the Terraform state differs from the actual state. I'll defer to more experienced Terraform users on whether that's normal or not, but I do get an error in that scenario.
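That said, drift like this is routine in Terraform use, and providers are expected to survive it: the refresh step exists precisely to detect out-of-band changes, and Terraform's own crash message above calls the panic a plugin bug. The conventional way an SDK v1 provider (which v0.4.6 is, per the stack trace) reports an externally deleted resource is to clear the ID during read. A minimal sketch, with fetchKey as a hypothetical stand-in for the Rundeck key-storage lookup:

package rundeck

import "github.com/hashicorp/terraform-plugin-sdk/helper/schema"

// Sketch only: the standard SDK v1 pattern for handling a resource
// that was deleted outside Terraform. fetchKey is hypothetical, not
// the provider's real API.
func resourcePrivateKeyRead(d *schema.ResourceData, meta interface{}) error {
	exists, err := fetchKey(d.Id())
	if err != nil {
		return err
	}
	if !exists {
		d.SetId("") // resource is gone; the next plan proposes recreation
		return nil
	}
	return nil
}

func fetchKey(path string) (bool, error) {
	// stand-in for a Rundeck key-storage API call that distinguishes
	// "not found" from a genuine error
	return false, nil
}

Until the provider handles this, a workaround is to drop the orphaned entries from state by hand, for example with terraform state rm 'module.rundeck_lab["testing"].rundeck_public_key.lab_unix[0]', so the next apply recreates them.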