terraform-provider-local

grpc: received message larger than max

Open ghost opened this issue 6 years ago • 26 comments

This issue was originally opened by @tebriel as hashicorp/terraform#21709. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.12.2
+ provider.archive v1.2.1
+ provider.aws v2.14.0
+ provider.local v1.2.2
+ provider.template v2.1.1

Terraform Configuration Files

// Nothing exceptionally important at this time

Debug Output

https://gist.github.com/tebriel/08f699ce69555a2670884343f9609feb

Crash Output

No crash

Expected Behavior

It should've completed the plan

Actual Behavior

Error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (9761610 vs. 4194304)

Steps to Reproduce

terraform plan on my medium-sized project.

Additional Context

Running within make, but it has the same profile outside of make. This applies fine in 0.11.14.

References

ghost avatar Jun 13 '19 16:06 ghost

After some investigation and discussion in hashicorp/terraform#21709, I moved this here to represent a change to add a file size limit to this provider (smaller than the 4MB limit imposed by Terraform Core so that users will never hit that generic error even when counting protocol overhead) and to document that limit for both the local_file data source and the local_file resource type.

apparentlymart avatar Jun 13 '19 18:06 apparentlymart

Is this still open? I'd like to pick this up if so. Could you clarify/confirm the request?

  1. Add file size limit of 4mb in the local provider through a validator
  2. Update docs to reflect the size limit

jukie avatar Oct 03 '19 20:10 jukie

Hello

Do you plan to fix this problem? If so, when?

itessential avatar Nov 28 '19 15:11 itessential

> Is this still open? I'd like to pick this up if so. Could you clarify/confirm the request?
>
> 1. Add file size limit of 4mb in the local provider through a validator
> 2. Update docs to reflect the size limit

I think the best fix will be to support files >4Mb

mikea avatar Dec 20 '19 18:12 mikea

Yes, this problem still persists.

itessential avatar Dec 20 '19 19:12 itessential

Yes, I ran into this issue today on the local_file data source pointing at a prospective AWS Lambda archive file.

Prototyped avatar Dec 27 '19 16:12 Prototyped

Hello, is there any progress on this issue, or was it parked? This can become a bigger problem when rendering Kubernetes template files that must be stored to disk, since Kubernetes YAML files can get pretty big. My workaround is to split the file in two: the initial file was 2MB, and now I have two files of a bit less than 1MB each, which does work. Thanks

fsantos2019 avatar Feb 24 '20 15:02 fsantos2019

Ran into this using the aws_lambda_function resource...


data "local_file" "lambda" {
  filename = "${path.module}/out.zip"
}

resource "aws_s3_bucket_object" "lambda" {
  bucket = var.lambda_bucket
  key    = "${local.name}.zip"
  source = data.local_file.lambda.filename
  etag   = filemd5(data.local_file.lambda.filename)
}

resource "aws_lambda_function" "login_api" {
  function_name    = local.name
  role             = aws_iam_role.lambda_role.arn
  handler          = "lambda.handler"
  s3_bucket        = aws_s3_bucket_object.lambda.bucket
  s3_key           = aws_s3_bucket_object.lambda.key
  source_code_hash = filebase64sha256(data.local_file.lambda.filename)
}

chexov avatar Apr 16 '20 22:04 chexov

Is there any agreement on how we can move forward? Files over 4mb only worked previously due to a lack of safety checks (See https://github.com/hashicorp/terraform/issues/21709#issuecomment-501497885) so the error is valid and it doesn’t sound like changing the limit in terraform core will be an option either (Re: “not a bug, it’s a feature”).

We could possibly handle it locally by splitting files into 4MB chunks within the provider, but I'm not sure if that would create its own issues. I can pursue that, but before I waste time, would that even be acceptable @apparentlymart ?

jukie avatar May 02 '20 18:05 jukie

Using Terraform 0.12.23 and AWS provider 2.61.0, I'm getting the same error: Error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (18182422 vs. 4194304)

It looks as though the core package has been updated to allow 64MB - https://github.com/hashicorp/terraform/pull/20906#

And according to the lambda limits docs 50MB files are able to be uploaded.

Would it not be best to set the safety check to 50MB?

AdamWorley avatar May 14 '20 14:05 AdamWorley

Just as an FYI for anyone having this issue.

If you put your zip file in an S3 bucket, you shouldn't face this problem. But remember to use aws_s3_bucket_object.lambda_zip.content_base64 rather than the filebase64(path) function; then you won't have this issue (or at least that was the fix for me).
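Building on the Lambda example earlier in this thread, here is a minimal sketch of the S3-based approach that keeps the archive bytes out of Terraform state entirely: pass a plain path to aws_s3_bucket_object and hash it with filebase64sha256 (which returns only a short digest, not the file contents), instead of routing the file through a local_file data source. All names (var.lambda_bucket, out.zip, the IAM role, runtime) are illustrative assumptions, not from the thread:

```hcl
# Sketch only: bucket, key, and file names are illustrative.
resource "aws_s3_bucket_object" "lambda" {
  bucket = var.lambda_bucket
  key    = "lambda.zip"
  # A path, not file contents, so the archive never crosses the gRPC boundary.
  source = "${path.module}/out.zip"
  etag   = filemd5("${path.module}/out.zip")
}

resource "aws_lambda_function" "example" {
  function_name    = "example"
  role             = aws_iam_role.lambda_role.arn
  handler          = "lambda.handler"
  runtime          = "nodejs18.x"
  s3_bucket        = aws_s3_bucket_object.lambda.bucket
  s3_key           = aws_s3_bucket_object.lambda.key
  # filebase64sha256 returns a short digest, so large files are fine here.
  source_code_hash = filebase64sha256("${path.module}/out.zip")
}
```

The key design point is that only small strings (a path, an ETag, a digest) ever pass between Terraform Core and the providers, so the 4MB message cap is never approached regardless of archive size.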

maxcbc avatar Jun 29 '20 18:06 maxcbc

Another option is using an external data source.

For example, given a filename in the variable deployment_package, generate the base64 hash with the following:

data "external" "deployment_package" {
  program = ["/bin/bash", "-c", <<EOS
#!/bin/bash
set -e
SHA=$(openssl dgst -sha256 ${var.deployment_package} | cut -d' ' -f2 | base64)
jq -n --arg sha "$SHA" '{"filebase64sha256": $sha }'
EOS
  ]
}

and use it as such:

source_code_hash = data.external.deployment_package.result.filebase64sha256

which should give you

+ source_code_hash = "ZjRkOTM4MzBlMDk4ODVkNWZmMDIyMTAwMmNkMDhmMTJhYTUxMDUzZmIzOThkMmE4ODQyOTc2MjcwNThmZmE3Nwo="

cmaurer avatar Jul 15 '20 20:07 cmaurer

+1 this issue; it's causing us much pain, as we intentionally want to inline larger files into the Terraform configuration.

I see that https://github.com/hashicorp/terraform/pull/20906 was merged over a year ago, but the symptom described above still persists.

Can the gRPC transfer limit be increased across the whole project, so that downstream services which can accept such payloads work properly without workarounds?

realn0whereman avatar Aug 05 '20 16:08 realn0whereman

Still happening with Terraform 0.12.24. Any workaround for the gRPC limit error?

anilkumarnagaraj avatar Sep 02 '20 03:09 anilkumarnagaraj

This is still happening with Terraform 0.13.5, when using body with an API Gateway (v2), using version 3.14.1 of the AWS provider.

To add more clarity, I'm using the file function in my case:

body = file(var.body)

The file in question is only 1.5MB in size.

If I remove the body declaration, Terraform runs successfully.

Update

I have used jq to compress and reduce the size of the body to ~500KB, and there was no error. It looks like the threshold might be lower than 4MB; 1MB, perhaps?
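The compaction step described here can be done with jq's -c (compact) flag, which strips all insignificant whitespace from a JSON document. A small sketch, assuming jq is installed; the file names api.json and api.min.json are illustrative, not from the thread:

```shell
# Write a small pretty-printed JSON file, then minify it with jq -c.
# "api.json" / "api.min.json" are illustrative names.
printf '{\n  "openapi": "3.0.0",\n  "info": { "title": "demo" }\n}\n' > api.json

# -c emits each document on a single line with no extra whitespace.
jq -c . api.json > api.min.json

# Compare sizes: the compact copy is strictly smaller.
wc -c api.json api.min.json
```

For large OpenAPI bodies this only helps when the file is already close to the limit, as the commenters note; it reduces formatting overhead, not actual content.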

finferflu avatar Nov 11 '20 16:11 finferflu

I still have this issue with Terraform v0.12.29, provider.archive v2.0.0, provider.aws v3.15.0, provider.template v2.2.0.

Need filebase64 to support files > 4MB, because using it in combination with archive_file is the only way to make it idempotent. Using a local_file in between breaks that...


data "archive_file" "this" {
  type        = "zip"
  output_path = "${path.module}/test.zip"

  source {
    filename = "test.crt"
    content  = file("${path.module}/archive/test.crt")
  }

  source {
    filename = "binary-file"
    content  = filebase64("${path.module}/archive/binary-file")
  }

  source {
    filename = "config.yml"
    content  = data.template_file.this.rendered
  }
}

atamgp avatar Nov 13 '20 09:11 atamgp

I also have this issue trying to deploy a Rust function to IBM Cloud. Similarly to @atamgp, I have a data "archive_file" which fails with

grpc: received message larger than max (11484267 vs. 4194304)

But even if this succeeded (or the .zip file is created manually), the resource "ibm_function_action" would still fail with

grpc: received message larger than max (7074738 vs. 4194304)
Terraform v0.14.3
+ provider registry.terraform.io/hashicorp/archive v2.0.0
+ provider registry.terraform.io/hashicorp/local v2.0.0
+ provider registry.terraform.io/ibm-cloud/ibm v1.12.0

reitermarkus avatar Dec 24 '20 01:12 reitermarkus

Faced the same issue with a Kubernetes config map:

resource "kubernetes_config_map" "nginx" {
  metadata {
    name      = "geoip"
    namespace = "ingress"
  }
  
  binary_data = {
    "GeoLite2-Country.mmdb" = filebase64("${path.module}/config/GeoLite2-Country.mmdb")
  }
}
Acquiring state lock. This may take a few moments...

Error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5248767 vs. 4194304)
Terraform v0.14.4
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.3

mo4islona avatar Feb 25 '21 21:02 mo4islona

I've encountered the same issue; it looks like there's a limitation on how many characters the resource source can contain.

Using a file uploaded to a bucket (without compressing it) fixed my issue. I'm assuming that what helped is that .body from S3 is usually a stream, as opposed to .rendered (which I was using before), which generates more characters in the resource source.

jankozuchowski avatar Apr 30 '21 06:04 jankozuchowski

> This is still happening with Terraform 0.13.5, when using body with an API Gateway (v2), using version 3.14.1 of the AWS provider.
>
> To add more clarity, I'm using the file function in my case:
>
> body = file(var.body)
>
> The file in question is only 1.5MB in size.
>
> If I remove the body declaration, Terraform runs successfully.
>
> Update
>
> I have used jq to compress and reduce the size of the body to ~500KB, and there was no error. It looks like the threshold might be lower than 4MB; 1MB, perhaps?

@finferflu - I have found the same thing; we were running into this with a 1.5MB OpenAPI JSON file. I was under the impression that it was not the actual file handle on the JSON that was causing this, but that the "body" of the REST API now contains it, which is then included in the state; with all the escape characters and other items added there, the state file exceeds 4MB. To avoid a local file for the swagger, we uploaded it to S3 and used an S3 data object in TF, and the same problem occurred, so that's a strong indicator in support of this.

brettcave avatar May 07 '21 17:05 brettcave

Still getting this issue with v0.15.4 and Terraform Cloud. We imported some infrastructure while using Terraform Cloud and then tried a plan, but cannot get the state file out:

╷
│ Error: Plugin error
│
│   with okta_group.user_type_non_service_accounts,
│   on groups.tf line 174, in resource "okta_group" "user_type_non_service_accounts":
│  174: resource "okta_group" "user_type_non_service_accounts" {
│
│ The plugin returned an unexpected error from plugin.(*GRPCProvider).UpgradeResourceState: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (6280527 vs. 4194304)
╵

kabads avatar Jun 28 '21 08:06 kabads

My file is around 2.4 MB and I am facing this issue even today.

resource "local_file" "parse-template" {
  content = templatefile(local.template-name, {
    var1 = value1
    var2 = value2
  })
  filename = local.script-name
}

Any workarounds for this, please?

VikramVasudevan avatar Jul 23 '21 13:07 VikramVasudevan

We ran into this error when using swagger JSON files and API Gateway. We temporarily fixed it by compressing the JSON swagger file, which was sufficient; the swagger size went from 1.4MB to 950KB.

It's not a real workaround, but maybe it helps somebody who is also close to the limit. Strangely, the error kept persisting even though we didn't use any template_file or local_file data source/resource (we used the templatefile function instead).

filipvh-sentia avatar Sep 06 '21 10:09 filipvh-sentia

Can this get more attention please?

atamgp avatar Nov 02 '21 06:11 atamgp

Could we get a target timeline for these fixes, or a note on any challenges in the present architecture?

dduleep avatar Jun 13 '22 13:06 dduleep

Hi folks 👋 This issue, while not mentioned in the CHANGELOG, may have been addressed with some underlying dependency updates that would have been included in the (latest) v2.2.3 release of this provider. In particular, this limit should be closer to 256MB. Does upgrading to this version of the provider help prevent this error?

bflad avatar Jun 14 '22 00:06 bflad

Closing due to lack of response -- if this issue still exists after v2.2.3, please open a new issue and we'll investigate further.

bflad avatar Apr 14 '23 09:04 bflad

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions[bot] avatar May 23 '24 07:05 github-actions[bot]