terraform-provider-google
Cloud Run: cannot reconcile service edited through console
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
- Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
- If you are interested in working on this issue or have submitted a pull request, please leave a comment.
- If an issue is assigned to the `modular-magician` user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to `hashibot`, a community member has claimed the issue already.
Terraform Version
```
terraform -v
Terraform v1.2.3
on darwin_arm64
+ provider registry.terraform.io/chainguard-dev/ko v0.0.4
+ provider registry.terraform.io/hashicorp/google v4.47.0
+ provider registry.terraform.io/hashicorp/google-beta v4.47.0
...
```
Affected Resource(s)
- `google_cloud_run_service`
Terraform Configuration Files
This should affect virtually any Cloud Run service deployed through terraform.
Debug Output
N/A
Panic Output
N/A
Expected Behavior
Terraform reconciles the service.
Actual Behavior
After ~20 minutes it times out and prints an error with a 409 because the named revision already exists.
Steps to Reproduce
- Deploy a service via terraform,
- Edit it via the Console's editor (not yaml),
- Deploy the service again via terraform.
Important Factoids
The Knative resource model used by Cloud Run supports "bring your own revision name", where you can use `spec.template.metadata.name` to name the revision that the Service will create. This is used by the Cloud Run console when edits are made.
If changes are made to the service without removing or updating this name, then things will fail to deploy.
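To make the failure mode concrete, here is a hypothetical sketch (using the v1 Terraform resource; the service and revision names are made up, not from this issue) of what pinning a revision name looks like. If a revision with that exact name already exists, the next update is rejected with a 409:

```hcl
# Hypothetical example: pinning spec.template.metadata.name the way the
# Console editor does. All names here are illustrative.
resource "google_cloud_run_service" "example" {
  name     = "my-service"
  location = "us-central1"

  template {
    metadata {
      # "bring your own revision name"; re-submitting an existing name
      # causes a 409 on the next deploy
      name = "my-service-00002-abc"
    }
    spec {
      containers {
        image = "us-docker.pkg.dev/cloudrun/container/hello"
      }
    }
  }
}
```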
cc @steren
References
b/272367711
@mattmoor when you execute step 3, do you mean you want to re-deploy the terraform config which is the same as step 1? Can you share your config and the debug log?
@edwardmedia it doesn't matter, it could be asking terraform to reconcile things back to how they were, or deploying something new.
The edit we made for step 2 was to add a trivial env var to trigger a rollout, e.g. a `FOO=bar` env var.
You can repro it with the examples in https://github.com/chainguard-dev/terraform-google-prober
I'd recommend the basic one, as the complex one spins up GCLB, which is pricey.
https://github.com/chainguard-dev/terraform-google-prober/tree/main/examples/basic
here are some more details:
- Step 1: I deployed a service using Terraform, following these steps
- Step 2: I used the UI to add an env var: https://cloud.google.com/run/docs/configuring/environment-variables#console
The YAML of my service is now:
```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: tf-test
  namespace: '607903476290'
  selfLink: /apis/serving.knative.dev/v1/namespaces/607903476290/services/tf-test
  uid: 5493a021-319a-446e-92eb-99e7bfd39d48
  resourceVersion: AAXxsHOMS4Y
  generation: 2
  creationTimestamp: '2023-01-07T18:09:16.917847Z'
  labels:
    cloud.googleapis.com/location: us-central1
  annotations:
    run.googleapis.com/client-name: cloud-console
    serving.knative.dev/creator: [email protected]
    serving.knative.dev/lastModifier: [email protected]
    client.knative.dev/user-image: us-docker.pkg.dev/cloudrun/container/hello
    run.googleapis.com/ingress: all
    run.googleapis.com/ingress-status: all
spec:
  template:
    metadata:
      name: tf-test-00002-juj
      annotations:
        run.googleapis.com/client-name: cloud-console
        autoscaling.knative.dev/maxScale: '100'
    spec:
      containerConcurrency: 80
      timeoutSeconds: 300
      serviceAccountName: [email protected]
      containers:
      - image: us-docker.pkg.dev/cloudrun/container/hello
        ports:
        - name: http1
          containerPort: 8080
        env:
        - name: FOO
          value: Bar
        resources:
          limits:
            cpu: 1000m
            memory: 512Mi
  traffic:
  - percent: 100
    latestRevision: true
```
Note the `spec.template.metadata.name` attribute is set.
I run `terraform plan`; note that it does *not* call out that `spec.template.metadata.name` will be reset to null:
```
terraform plan
data.google_iam_policy.noauth: Reading...
data.google_iam_policy.noauth: Read complete after 0s [id=3450855414]
google_cloud_run_service.default: Refreshing state... [id=locations/us-central1/namespaces/steren-playground/services/tf-test]
google_cloud_run_service_iam_policy.noauth: Refreshing state... [id=v1/projects/steren-playground/locations/us-central1/services/tf-test]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # google_cloud_run_service.default will be updated in-place
  ~ resource "google_cloud_run_service" "default" {
        id   = "locations/us-central1/namespaces/steren-playground/services/tf-test"
        name = "tf-test"
        # (4 unchanged attributes hidden)

      ~ metadata {
          ~ annotations = {
              - "client.knative.dev/user-image"  = "us-docker.pkg.dev/cloudrun/container/hello" -> null
              ~ "run.googleapis.com/client-name" = "cloud-console" -> "terraform"
              - "run.googleapis.com/ingress"     = "all" -> null
                # (3 unchanged elements hidden)
            }
            # (6 unchanged attributes hidden)
        }

      ~ template {
          ~ spec {
                # (3 unchanged attributes hidden)

              ~ containers {
                    # (3 unchanged attributes hidden)

                  - env {
                      - name  = "FOO" -> null
                      - value = "Bar" -> null
                    }
                    # (2 unchanged blocks hidden)
                }
            }
            # (1 unchanged block hidden)
        }
        # (1 unchanged block hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
```
I run `terraform apply` and it indeed seems to hang.
Two things:
- Is `spec.template.metadata.name` treated specially by Terraform? I would expect it to be reset to null in the `plan` and `apply`
- Why does it hang?
This problem also happens if you provision a Cloud Run service using the Terraform `google_cloud_run_v2_service` resource and then deploy revisions in CI/CD. Either Terraform will always detect and apply "changes" because the revision has changed, or, if you add the revision to the lifecycle `ignore_changes`, you get the error `Revision named ... with different configuration already exists`.
I had this same issue - I deployed a cloudrun service via Terraform (terraform cloud) and then subsequently deployed new revisions with updated image tags via this GitHub actions workflow.
I was getting the same error: `Error 409: Revision named 'service-name-00048-fts' with different configuration already exists.`
I was able to circumvent this error with 2 code changes. The first: adding the `random_uuid` Terraform generator, and providing a unique revision name to the Terraform resource.

```hcl
resource "random_uuid" "cloudrun_revision_id" {
  keepers = {
    first = timestamp()
  }
}

resource "google_cloud_run_v2_service" "service" {
  name = var.cloudrun_name
  template {
    revision = "${var.cloudrun_name}-${random_uuid.cloudrun_revision_id.result}"
  }
}
```
Secondly, ignoring the following lifecycle changes:

```hcl
lifecycle {
  ignore_changes = [
    annotations,
    client_version,
    client,
    labels,
    template.0.annotations,
    template.0.labels,
  ]
}
```
Not ideal, but a fairly simple workaround that allowed me to manage the service from two angles. I hope this helps!
@jamiezieziula the problem with that solution, though, is that every time you've deployed a new revision without Terraform, the next Terraform run will create a new revision even if there are no changes.
Any update on a fix for this that doesn't involve deploying a new revision on every TF apply?
While I agree this should be fixed, I am also wondering if the same issue occurs in the v2 resources.
I recommend switching to v2 as a workaround.
> I recommend switching to v2 as a workaround.
It does reproduce with v2, see https://github.com/hashicorp/terraform-provider-google/issues/13410#issuecomment-1404610413
I'm not sure everyone on that thread is talking about the same things.
Let me recap:

1. Any update to `spec.template` will create a new Revision. This is how Cloud Run works.
2. If `spec.template.metadata.name` is set, and a revision already exists with this name, Cloud Run will reject the update. This is how Cloud Run works.
3. The issue reported by @mattmoor is when using Terraform, then using the UI to make a change. The UI will set `spec.template.metadata.name`. What is unclear is why this name isn't just reconciled by Terraform.

If you have an issue with 1. or 2., unfortunately, these are "working as intended". Please confirm that you are in the conditions of 3.
> 1. Any update to `spec.template` will create a new Revision. This is how Cloud Run works.

This is expected and desired. You want changes to the Terraform config to update your Cloud Run service.

> 2. If `spec.template.metadata.name` is set, and a revision already exists with this name, Cloud Run will reject the update. This is how Cloud Run works.

I do not set this property in my Terraform config or anywhere else manually. I expect Terraform, the UI, the gcloud CLI, the REST API, etc. to generate a new unique revision name when needed. This generally works, except, see below.

> 3. The issue reported by @mattmoor is when using Terraform, then using the UI to make a change. The UI will set `spec.template.metadata.name`. What is unclear is why this name isn't just reconciled by Terraform.

AFAIK it boils down to what you set `ignore_changes` to. Usually, after realizing that UI/API deployments cause Terraform to want to create a new revision, people set it to something like this:
```hcl
lifecycle {
  ignore_changes = [template[0].revision, labels, annotations, template[0].annotations, template[0].containers[0].image, client, client_version, template[0].labels]
}
```
However, ignoring `template[0].revision` seems to stop Terraform from generating a new revision name when it actually should deploy a new revision, and then it fails to deploy with the error `googleapi: Error 409: Requested entity already exists`. If you stop ignoring `template[0].revision`, then Terraform will detect changes and redeploy unexpectedly if you, for example, deploy a new image (which is ignored via `template[0].containers[0].image`), because the revision has changed.
Hey folks, there may be a few things getting tangled up here, but I want to share the results of some testing that may help.
@mattmoor's original issue (deploy a CR service via v1 Terraform -> make a change in another client that deploys a new revision -> deploy again via TF and fail) will always be a problem with the v1 Terraform resource. The v2 TF resource should work correctly in that circumstance, provided that you did not provide a revision name in the initial v2 Terraform deployment. No `ignore_changes` should be required.
@FabianFrank , can you share more details of your repro using the v2 resource, per https://github.com/hashicorp/terraform-provider-google/issues/13410#issuecomment-1404610413? Assuming you did not provide a revision name in the initial terraform deployment, I cannot reproduce this behavior.
Hi @justinmahood, I can't speak for @FabianFrank but it seems like I'm experiencing the same issue as him using `google_cloud_run_v2_service`.

My resource definition does not include a `revision` field, even for the initial creation. My `ignore_changes` is set to ignore `template[0].containers[0].image` because that is the field I want to update outside of the context of Terraform (via `gcloud run deploy <service> --image <image>`).
There are two issues I'm running into, neither of which results in the behavior I would ideally want.

With the revision field left out, and also not included in `ignore_changes`:

- initial deployment works. subsequent changes on the TF side create revisions as expected
- `gcloud run deploy` successfully deploys a new revision
- now a tf plan sees that the revision is different, and will deploy a new revision just to remove the "revision" field, even if no other changes were made to the service definition

```
~ template {
    - revision = "myservice-00004-bub" -> null
```
So, it seems like the solution to that would be to add this revision field to `ignore_changes`, and this does solve the problem of creating unnecessary revisions. However, if I have to actually make a change to the TF definition after my `gcloud run deploy` (for example, changing `max_instance_count` from `10` to `20`), that is when I see this other error: `Error: Error updating Service "projects/.../services/myservice": googleapi: Error 409: Requested entity already exists`, even though the `plan` action seems to make the correct plan.
So... the repro steps for my case would be:

1. deploy the service using `google_cloud_run_v2_service`. `revision` should not be present, and `template[0].containers[0].image` as well as `template[0].revision` should be in the lifecycle `ignore_changes` block
2. deploy a new revision of the service using something like `gcloud run deploy <service> --image <myimage>`
3. modify a value in the TF definition. plan -> successfully shows `1 to change`, apply results in the above-mentioned `409`
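For reference, a minimal sketch of the configuration described in step 1 (the resource name, region, and image are placeholders, not from the original report):

```hcl
resource "google_cloud_run_v2_service" "myservice" {
  name     = "myservice"
  location = "us-central1" # placeholder region

  template {
    # note: no `revision` attribute is set here
    containers {
      image = "us-docker.pkg.dev/cloudrun/container/hello" # placeholder image
    }
  }

  lifecycle {
    # the image is updated out-of-band via `gcloud run deploy`;
    # ignoring the revision avoids spurious diffs, at the cost of the
    # 409 when a real change is later applied
    ignore_changes = [
      template[0].revision,
      template[0].containers[0].image,
    ]
  }
}
```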
When I do a `terraform state show google_cloud_run_v2_service.myservice`, I can see that there is a revision in there: `revision = "myservice-00004-bub"`, and my guess is that a subsequent apply is trying to create another revision with that same name, which explains the 409. But what I want is for it to let Google auto-generate a new revision name. Is something like that possible?
@trriplejay explained it perfectly, that is what I am experiencing!
I think I know what's happening: when you use `gcloud run deploy` or the Cloud Console clients, a ("nice") revision name is set by the client.
Subsequent Terraform updates would need to either remove or update this name; ignoring it means that the same name is used, and the update is therefore rejected.
The Cloud Run team could evaluate updating the behavior of these clients, so that they do not set generated "nice" names, but leave the name field empty. This was originally done because the server-side generated revision names are a bit ugly (no generation number and a large set of random letters); these server-side generated names were put in place to be consistent with Knative. We'll follow up.
interesting, thanks for the explanation @steren! do you think there is some workaround? maybe directly using the api to create a revision without giving a name?
I think the correct behavior would be to ignore changes in the revision that occur outside Terraform, but still generate a new revision when a change needs to be applied. Sort of like a one-way `ignore_changes`.
Hello all, I too have been impacted by this issue, as accurately described by others. Since this thread made it clear that the problem had to do with a revision name that was not being regenerated, I focused my efforts there.
I spotted the `autogenerate_revision_name` flag in the example here:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/cloud_run_service#example-usage---cloud-run-service-sql
Which is documented here: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/cloud_run_service#autogenerate_revision_name
What this documentation does not say is that the default is `false`! I found that detail in the source:
https://github.com/hashicorp/terraform-provider-google/blob/a56669d837a2ef157a470ed1c4c13cc52526c9ad/google/resource_cloud_run_service.go#L783
So, I added `autogenerate_revision_name = true` to my `google_cloud_run_service` resource and I was able to get a successful deployment; a new revision was created. Prior to this, I was seeing `Error 409: Requested entity already exists`.
Before I get too excited, I was hoping that one of you folks could confirm my findings. Thank you.
[UPDATE] I went ahead and made another Cloud Run revision using the Google Console. I was then able to deploy another revision via terraform without difficulty.
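A minimal sketch of the v1 workaround described above (the service name, region, and image are placeholders):

```hcl
resource "google_cloud_run_service" "default" {
  name     = "my-service"  # placeholder
  location = "us-central1" # placeholder

  # Defaults to false; without it the provider re-sends the stored
  # revision name and the API answers with Error 409.
  autogenerate_revision_name = true

  template {
    spec {
      containers {
        image = "us-docker.pkg.dev/cloudrun/container/hello" # placeholder
      }
    }
  }
}
```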
@mattcollier that sounds perfect, however I notice that you're using the v1 resource `google_cloud_run_service` rather than `google_cloud_run_v2_service`. It looks like v2 does not support this flag. I might switch to v1 if it resolves this issue though. thanks for finding that!
i wonder if there's a reason this was left out of v2?
As I described above, the root cause comes from the 2018-ish design choice of having gcloud and Cloud Console name revisions by default because automatic names weren't so nice. We should change that. Cloud Run API should just generate good names automatically and clients should not implicitly name revisions if they don't want to.
`autogenerate_revision_name = true` in `google_cloud_run_service` was probably added to address this problem or to mimic the behavior of other clients. But it is logic built into the Terraform resource; it basically "patches" the root cause.

Others on the team could chime in, but I don't think we want this feature added to `google_cloud_run_v2_service`. The idea with `google_cloud_run_v2_service` is that it exactly maps to the Cloud Run Admin API v2 resources. This enables the Cloud Run team to guarantee that any Cloud Run feature added to the Admin API v2 automatically appears in `google_cloud_run_v2_service`. Therefore, we want to avoid any hand-crafting of the behavior. I am not even sure the infrastructure used allows it.

It's great that `autogenerate_revision_name = true` exists in `google_cloud_run_service`, but as I said, we want to fix this at the root, in the Cloud Run API and the CLI/UI clients.
Hi!
@trriplejay - We have a very similar issue with `resource "google_cloud_run_v2_service"`.
Background: Terraform is used to keep the cloud infrastructure configuration up to date; `gcloud run deploy` is used in pipelines to deploy newer versions.
Terraform is executed ~once per day to validate/update the infrastructure configuration; `gcloud run deploy` could be executed multiple times per day.
Scenario 1.
If there has been a `gcloud run deploy`, then Terraform identifies that `template.revision` has changed and initiates a new deploy (an unnecessary deploy, because there is no real configuration update/drift to fix).
That makes one unnecessary/redundant deploy per day per Cloud Run service, which adds up to tens or hundreds of unnecessary/redundant deploys per day.
Scenario 2.
If there has been a `gcloud run deploy` and the setting `lifecycle { ignore_changes = [template[0].revision] }` is present, then the next Terraform execution that discovers a real configuration drift (example: `template { scaling { max_instance_count = 6 -> 5 } }`) reports the error `Error: Error updating Service "projects//locations//services/***": googleapi: Error 409: Requested entity already exists`.

Any workarounds? Any plans to fix it? By fixing I mean that either 1) "scenario 1" should not cause an unnecessary deploy, OR 2) "scenario 2" should perform a successful deploy.
Regards Marek Läll
hi @MarekUniq , yeah that's almost exactly what I'm trying to do. For now I'm just going to live with the extra deployments.
It sounds like @steren wants to fix your scenario 1 by updating the gcloud client so that it will stop sending its friendly revision name and then it should play nicely with terraform.
Hi!
@trriplejay @steren - Just to point out that fixing the gcloud client may not be enough.
While creating a repeatable test case, I used the "Edit & Deploy New Revision" button in the Cloud Run GUI of the Google Cloud Console, and the result is the same: Terraform detects that `template.revision` has changed and triggers an additional deployment.
Therefore, I think the Console's "Edit & Deploy New Revision" button should also be fixed.
Regards Marek Läll
Yes, please see my comment
> Subsequent Terraform updates would need to either remove or update this name; ignoring it means that the same name is used, and the update is therefore rejected.
There are two ways to interpret ignoring (`ignore_changes`):

1. ignore while detecting changes, but use it in case there is going to be a new deploy (use the Terraform value)
2. ignore while detecting changes, and also ignore in case there is going to be a new deploy (use the currently deployed value)

`lifecycle { ignore_changes = [template[0].revision] }` should follow case 1.
`lifecycle { ignore_changes = [template[0].containers[0].image] }` should follow case 2.
That would be my expectation.
I understand that Terraform always uses option 2.
Adding support for option 1 would also solve this problem. (An additional keyword like `ignore_changes_only_for_compare`, or similar, would help.)
Hey folks, just wanted to update the thread on what we're doing on the Cloud Run team to address this issue. As @steren mentioned above, this is an issue with our two major clients (the `gcloud` CLI and the GCP Console UI) setting a 'prettified' revision name in the spec.
After evaluating, we're going to change our clients to leave the revision name empty by default. We're also updating the behavior of the control plane to use the 'pretty' name scheme when a revision name is not specified.
TL;DR - We're updating our clients and control plane, that will address the root cause of this problem. We'll keep this thread posted when there's an update.
> TL;DR - We're updating our clients and control plane, that will address the root cause of this problem. We'll keep this thread posted when there's an update.
@justinmahood Very likely this will fix the major part of the issue. Thank you very much for your effort!
There is a very similar issue related to 4 other properties. It is minor compared to `revision`, but still quite unpleasant.
Here is the scenario:
1. `terraform apply` (to create `resource "google_cloud_run_v2_service"`)
2. `gcloud --project "{{project}}" run deploy "{{service-name}}" --image "europe-north1-docker.pkg.dev/{{project}}/{{repository}}/image:develop-1204" --region "europe-north1"`
3. `terraform apply`

Step 3 (`terraform apply`) identifies the following differences to apply:
```
  # google_cloud_run_v2_service.{{service-name}} will be updated in-place
  ~ resource "google_cloud_run_v2_service" "{{service-name}}" {
      ~ annotations    = {
          - "client.knative.dev/user-image" = "europe-north1-docker.pkg.dev/{{project}}/{{repository}}/image:develop-1204" -> null
        }
      - client         = "gcloud" -> null
      - client_version = "424.0.0" -> null
        id             = "projects/{{project}}/locations/europe-north1/services/{{service-name}}"
        name           = "{{service-name}}"
        # (17 unchanged attributes hidden)

      ~ template {
          ~ annotations = {
              - "client.knative.dev/user-image" = "europe-north1-docker.pkg.dev/{{project}}/{{repository}}/image:develop-1204" -> null
            }
          - revision    = "{{service-name}}-00002-roy" -> null
            # (4 unchanged attributes hidden)
            # (3 unchanged blocks hidden)
        }
        # (1 unchanged block hidden)
    }
```
Those changes are irrelevant because they are just informative; those differences don't change deployment behavior:

- `annotations."client.knative.dev/user-image"`
- `client`
- `client_version`
- `template.annotations."client.knative.dev/user-image"`
Yes, I can ignore the informative properties with this clause:

```hcl
lifecycle {
  ignore_changes = [
    annotations["client.knative.dev/user-image"],
    client,
    client_version,
    template[0].annotations["client.knative.dev/user-image"],
  ]
}
```
And now the "not nice" part. Imagine the scenario goes on with step 4:
1. `terraform apply`
2. `gcloud run deploy`
3. `terraform apply` -- change in the Terraform config; example: `scaling.min_instance_count` is increased
4. `terraform apply`
Then the last intentional update was done by Terraform, but in the informative properties you can still see the values:

- `client = "gcloud"`
- `client_version = "424.0.0"`

but that is not true. In reality, the last deploy was done by:

- `client = "terraform"`
- `client_version = "1.4.4"`
Are there any suggestions about how to overcome this little nuance?
> The Cloud Run team could evaluate updating the behavior of these clients, so that they do not set generated "nice" names, but leave the name field empty. This was originally done because the server-side generated revision names are a bit ugly (no generation number and a large set of random letters); these server-side generated names were put in place to be consistent with Knative. We'll follow up.
@steren Do you have any Issue Tracker ID for this fix to Cloud Run that we can monitor? Or should we talk to our Google customer engineer about it?
Trying to migrate a few services to the Cloud Run second-generation execution runtime, I thought to use the v2 resources at the same time, and this popped up like an unpleasant blast from the past (we used Cloud Run prior to `autogenerate_revision_name` being introduced in the old resources), making it very difficult for us to use. 😄
Hello, any update? :) Same trouble for me: I deploy CR v2 with Terraform, update the image with gcloud, and after that I'm not able to apply a new config with Terraform :(
++