pulumi-aws
[Automation API] Pulumi deployment stuck in updating state due to typo in AWS Lambda configuration
What happened?
I'm encountering an issue where Pulumi deployments get stuck in the "updating" state when creating an AWS Lambda function. It seems a typo in the configuration is causing the problem.
Example
Reproduction Steps:
Use the following Pulumi code to create an AWS Lambda function with a typo in the S3Bucket property:
lambdaFunc, err := lambda.NewFunction(ctx, lambdaName, &lambda.FunctionArgs{
	// ... other properties
	S3Bucket: pulumi.StringPtr("example_Buckeet"), // Typo here!
	S3Key:    pulumi.StringPtr("example/v0.0.1.zip"),
	// ... other properties
})
Run the Pulumi deployment command.
Observed Behavior:
The deployment gets stuck in the "updating" state indefinitely.
Expected Behavior:
The deployment should either succeed or fail with a clear error message indicating the typo in the configuration.
Troubleshooting Attempts:
It took a considerable amount of time to debug and identify the typo as the root cause.
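One pattern that might have surfaced the typo much earlier (a sketch on my part, not something from the original report) is to resolve the bucket with an explicit lookup before wiring it into the Lambda function, so a wrong name fails immediately with a clear message. The helper name below is hypothetical and the snippet is meant as a drop-in for the program shown later.

import (
	"fmt"

	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/s3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

// resolveBucket is a hypothetical helper: it looks up the bucket by name and
// returns an error right away if no such bucket exists, instead of letting a
// later deployment step fail obscurely.
func resolveBucket(ctx *pulumi.Context, name string) (string, error) {
	bucket, err := s3.LookupBucket(ctx, &s3.LookupBucketArgs{Bucket: name})
	if err != nil {
		return "", fmt.Errorf("bucket %q not found (check for typos): %w", name, err)
	}
	// The data source's ID is the bucket name itself.
	return bucket.Id, nil
}

The resolved name can then be passed to the function, e.g. S3Bucket: pulumi.StringPtr(bucketName).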
Questions:
How can I effectively troubleshoot similar issues in future Pulumi deployments, specifically with AWS resources like Lambda functions?
What best practices can be recommended for debugging Pulumi deployments in such scenarios?
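On the troubleshooting question: when an up appears to hang, one concrete step (sketched here, not taken from the report) is to turn on engine debug logging for that operation through the Automation API, so the output shows which resource step the engine is waiting on. This assumes the same stack value produced by UpsertStackInlineSource in the setup below; the function name is illustrative.

import (
	"context"
	"os"

	"github.com/pulumi/pulumi/sdk/v3/go/auto"
	"github.com/pulumi/pulumi/sdk/v3/go/auto/debug"
	"github.com/pulumi/pulumi/sdk/v3/go/auto/optup"
)

// upWithVerboseLogs runs an update with maximum engine verbosity sent to
// stderr, and forwards the verbosity setting to provider plugins (such as
// aws) so their activity is visible too.
func upWithVerboseLogs(ctx context.Context, stack auto.Stack) error {
	logLevel := uint(9)
	_, err := stack.Up(ctx,
		optup.ProgressStreams(os.Stdout),
		optup.DebugLogging(debug.LoggingOptions{
			LogLevel:      &logLevel,
			LogToStdErr:   true,
			FlowToPlugins: true,
		}),
	)
	return err
}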
Request:
I would appreciate guidance on:
Efficient troubleshooting techniques for these types of deployment issues in Pulumi.
Best practices for debugging Pulumi deployments involving AWS resources.
Thank you for your assistance!
Output of pulumi about
CLI
Version      3.109.0
Go Version   go1.22.0
Go Compiler  gc

Plugins
NAME  VERSION
aws   6.25.0
go    unknown

Host
OS       ubuntu
Version  22.04
Arch     x86_64

This project is written in go: executable='/usr/local/go/bin/go' version='go version go1.22.1 linux/amd64'

Backend
Name           ...
URL            s3://example
User           ...
Organizations  ....
Token type     personal

Dependencies:
NAME                                                 VERSION
github.com/aws/aws-lambda-go                         v1.46.0
github.com/aws/aws-sdk-go-v2                         v1.25.3
github.com/aws/aws-sdk-go-v2/config                  v1.27.7
github.com/aws/aws-sdk-go-v2/credentials             v1.17.7
github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs  v1.34.3
github.com/aws/aws-sdk-go-v2/service/kms             v1.29.2
github.com/aws/aws-sdk-go-v2/service/lambda          v1.53.2
github.com/aws/aws-sdk-go-v2/service/s3              v1.51.4
github.com/aws/aws-sdk-go-v2/service/s3control       v1.44.2
github.com/aws/aws-sdk-go-v2/service/sns             v1.29.2
github.com/aws/aws-sdk-go-v2/service/sqs             v1.31.2
github.com/aws/aws-sdk-go-v2/service/sts             v1.28.4
github.com/pulumi/pulumi-aws/sdk/v6                  v6.25.0
github.com/pulumi/pulumi/sdk/v3                      v3.109.0
github.com/sirupsen/logrus                           v1.9.3
gopkg.in/yaml.v3                                     v3.0.1
Additional context
I'm using the Automation API with this setup:
func PulumiSetup() {
	projectName := "testtest"
	stackName := "test"
cwd, err := os.Getwd()
if err != nil {
fmt.Printf("Failed to get working directory: %v\n", err)
panic(err)
}
opts := []auto.LocalWorkspaceOption{
auto.Project(workspace.Project{
Name: tokens.PackageName(projectName),
Runtime: workspace.NewProjectRuntimeInfo("go", nil),
Main: cwd,
Backend: &workspace.ProjectBackend{
URL: "s3://example",
},
}),
}
stack, err := auto.UpsertStackInlineSource(context.Background(), stackName, projectName, pulumiMain, opts...)
if err != nil {
fmt.Printf("Error in UpsertStackInlineSource: %v\n", err)
panic(err)
}
w := stack.Workspace()
err = w.InstallPlugin(context.Background(), "aws", "v6.25.0")
if err != nil {
fmt.Printf("Failed to install program plugins: %v\n", err)
panic(err)
}
stdoutStreamer := optup.ProgressStreams(os.Stdout)
res, err := stack.Up(context.Background(), stdoutStreamer)
if err != nil {
fmt.Printf("Failed to update stack: %v\n\n", err)
panic(err)
}
fmt.Println(res)
}
and then run: go run main.go
The deployment gets stuck in the "updating" state indefinitely.
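One thing that stands out in the setup above (an observation plus a sketch, not part of the original report): stack.Up is called with context.Background(), so nothing on the client side ever cancels a hung update. Bounding the call with a deadline at least turns a hang into an error. The function name and the 30-minute budget are assumptions.

import (
	"context"
	"fmt"
	"os"
	"time"

	"github.com/pulumi/pulumi/sdk/v3/go/auto"
	"github.com/pulumi/pulumi/sdk/v3/go/auto/optup"
)

// upWithDeadline cancels the update if it has not finished within the given
// budget, so a hang surfaces as an error instead of blocking forever.
func upWithDeadline(stack auto.Stack, budget time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), budget)
	defer cancel()

	if _, err := stack.Up(ctx, optup.ProgressStreams(os.Stdout)); err != nil {
		return fmt.Errorf("update failed or timed out: %w", err)
	}
	return nil
}

With the setup above, the final Up call could become upWithDeadline(stack, 30*time.Minute).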
The deployment gets stuck in the "updating" state indefinitely.
How long is "indefinitely"? These operations should have a timeout that cancels them after a while; if you left this running for multiple hours and it was still going, that's suggestive of a timeout bug in the aws provider.
I'm going to move this to the AWS repo and someone will take a look.
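For reference, individual resource operations can also be given an explicit budget with the CustomTimeouts resource option, so a create or update that waits on a misconfigured dependency fails with an error rather than appearing to run forever. The sketch below mirrors the shape of the reporter's Lambda function; the names and the 10-minute budget are illustrative, not from this issue.

import (
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/lambda"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

// newLambdaWithTimeouts creates a Lambda function whose create and update
// steps are capped at 10 minutes each via the CustomTimeouts resource option.
func newLambdaWithTimeouts(ctx *pulumi.Context, role pulumi.StringInput) error {
	_, err := lambda.NewFunction(ctx, "example-fn", &lambda.FunctionArgs{
		Handler:  pulumi.String("index.handler"),
		Runtime:  pulumi.String("nodejs18.x"),
		Role:     role,
		S3Bucket: pulumi.String("example-bucket"),
		S3Key:    pulumi.String("example/v0.0.1.zip"),
	}, pulumi.Timeouts(&pulumi.CustomTimeouts{
		Create: "10m",
		Update: "10m",
	}))
	return err
}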
@KaizenLol I'm not able to reproduce this issue. I've tried both with and without the Automation API, and I always get an immediate failure. Here is a full example program that I have tried.
package main
import (
"context"
"fmt"
"os"
"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/iam"
"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/lambda"
"github.com/pulumi/pulumi/sdk/v3/go/auto"
"github.com/pulumi/pulumi/sdk/v3/go/auto/optup"
"github.com/pulumi/pulumi/sdk/v3/go/common/tokens"
"github.com/pulumi/pulumi/sdk/v3/go/common/workspace"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)
func main() {
pulumiMain := func(ctx *pulumi.Context) error {
assumeRolePolicy, _ := iam.GetPolicyDocument(ctx, &iam.GetPolicyDocumentArgs{
Statements: []iam.GetPolicyDocumentStatement{
{
Actions: []string{"sts:AssumeRole"},
Principals: []iam.GetPolicyDocumentStatementPrincipal{{Type: "Service", Identifiers: []string{"lambda.amazonaws.com"}}},
},
},
})
role, _ := iam.NewRole(ctx, "myrole", &iam.RoleArgs{
AssumeRolePolicy: pulumi.String(assumeRolePolicy.Json),
})
lambda.NewFunction(ctx, "mylambda", &lambda.FunctionArgs{
Handler: pulumi.String("index.handler"),
Runtime: pulumi.String("nodejs18.x"),
Role: role.Arn,
S3Bucket: pulumi.String("my-bucket"),
S3Key: pulumi.String("my-key"),
})
return nil
}
projectName := "pulumi-go-app"
stackName := "dev"
cwd, err := os.Getwd()
if err != nil {
fmt.Printf("Failed to get working directory: %v\n", err)
panic(err)
}
opts := []auto.LocalWorkspaceOption{
auto.Project(workspace.Project{
Name: tokens.PackageName(projectName),
Runtime: workspace.NewProjectRuntimeInfo("go", nil),
Main: cwd,
}),
}
stack, err := auto.UpsertStackInlineSource(context.Background(), stackName, projectName, pulumiMain, opts...)
if err != nil {
fmt.Printf("Error in UpsertStackInlineSource: %v\n", err)
panic(err)
}
w := stack.Workspace()
err = w.InstallPlugin(context.Background(), "aws", "v6.25.0")
if err != nil {
fmt.Printf("Failed to install program plugins: %v\n", err)
panic(err)
}
stdoutStreamer := optup.ProgressStreams(os.Stdout)
res, err := stack.Up(context.Background(), stdoutStreamer)
if err != nil {
fmt.Printf("Failed to update stack: %v\n\n", err)
panic(err)
}
fmt.Println(res)
}