Terraform-provider-local: grpc: received message larger than max

Created on 13 Jun 2019  ·  24 comments  ·  Source: hashicorp/terraform-provider-local

_This issue was originally opened by @tebriel as hashicorp/terraform#21709. It was migrated here as a result of the provider split. The original body of the issue is below._


Terraform Version

Terraform v0.12.2
+ provider.archive v1.2.1
+ provider.aws v2.14.0
+ provider.local v1.2.2
+ provider.template v2.1.1

Terraform Configuration Files

// Nothing exceptionally important at this time

Debug Output


https://gist.github.com/tebriel/08f699ce69555a2670884343f9609feb

Crash Output


No crash

Expected Behavior


It should've completed the plan

Actual Behavior

Error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (9761610 vs. 4194304)

Steps to Reproduce


Run terraform plan on my medium-sized project.

Additional Context


Running within make, but the behavior is the same outside of make. This applies fine in 0.11.14.

References

enhancement

Most helpful comment

Can this get more attention please?

All 24 comments

After some investigation and discussion in hashicorp/terraform#21709, I moved this here to represent a change to add a file size limit to this provider (smaller than the 4MB limit imposed by Terraform Core so that users will never hit that generic error even when counting protocol overhead) and to document that limit for both the local_file data source and the local_file resource type.

Is this still open? I'd like to pick this up if so.
Could you clarify/confirm the request?

  1. Add a file size limit of 4MB in the local provider through a validator
  2. Update the docs to reflect the size limit

Hello

Do you plan to fix this problem? If so, when?

Is this still open? I'd like to pick this up if so.
Could you clarify/confirm the request?

1. Add a file size limit of 4MB in the local provider through a validator

2. Update the docs to reflect the size limit

I think the best fix will be to support files >4MB.

Yes, this problem still persists.

Yes, I ran into this issue today on the local_file data source pointing at a prospective AWS Lambda archive file.

Hello, is there any progress on this issue, or was it parked? This can become a bigger issue if we use a template file for Kubernetes and must store the file to disk, since Kubernetes YAML files can become pretty big.
My workaround is to split the file in two. The initial file size was 2MB; now I have two files of a bit less than 1MB each, and it does work.
Thanks

Ran into this by using aws_lambda_function resource...

data "local_file" "lambda" {
  filename = "${path.module}/out.zip"
}

resource "aws_s3_bucket_object" "lambda" {
  bucket = var.lambda_bucket
  key    = "${local.name}.zip"
  source = data.local_file.lambda.filename
  etag = filemd5(data.local_file.lambda.filename)
}

resource "aws_lambda_function" "login_api" {
  function_name    = local.name
  role             = aws_iam_role.lambda_role.arn
  handler          = "lambda.handler"
  s3_bucket        = aws_s3_bucket_object.lambda.bucket
  s3_key           = aws_s3_bucket_object.lambda.key
  source_code_hash = filebase64sha256(data.local_file.lambda.filename)

Is there any agreement on how we can move forward?
Files over 4MB only worked previously due to a lack of safety checks (see https://github.com/hashicorp/terraform/issues/21709#issuecomment-501497885), so the error is valid, and it doesn’t sound like changing the limit in Terraform core will be an option either (re: “not a bug, it’s a feature”).

We could possibly handle it locally by splitting files into 4MB chunks within the provider, but I’m not sure if that would create its own issues. I can pursue that, but before I waste time, would that even be acceptable @apparentlymart?

Using Terraform 0.12.23 and AWS provider 2.61.0, I'm getting the same error: Error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (18182422 vs. 4194304)

It looks as though the core package has been updated to allow 64MB - https://github.com/hashicorp/terraform/pull/20906#

And according to the Lambda limits docs, files up to 50MB can be uploaded.

Would it not be best to set the safety check to 50MB?

Just as an FYI for anyone having this issue.

If you put your zip file in an S3 bucket you shouldn't face this problem. But remember to use aws_s3_bucket_object.lambda_zip.content_base64 rather than the filebase64(path) function; then you won't have this issue (or at least that was the fix for me).
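For anyone wanting a concrete starting point, here is a rough sketch of that S3-based approach (the bucket variable, IAM role, runtime, and handler below are placeholders; adjust for your setup):

resource "aws_s3_bucket_object" "lambda_zip" {
  # Uploading from a path keeps the archive out of the local provider's payload.
  bucket = var.lambda_bucket
  key    = "lambda.zip"
  source = "${path.module}/out.zip"
  etag   = filemd5("${path.module}/out.zip")
}

resource "aws_lambda_function" "example" {
  function_name = "example"
  role          = aws_iam_role.lambda_role.arn
  handler       = "lambda.handler"
  runtime       = "nodejs12.x"

  # Reference the uploaded object instead of reading the archive through data "local_file".
  s3_bucket = aws_s3_bucket_object.lambda_zip.bucket
  s3_key    = aws_s3_bucket_object.lambda_zip.key

  # filemd5/filebase64sha256 are evaluated by Terraform core, so the archive
  # contents never cross a provider's gRPC channel.
  source_code_hash = filebase64sha256("${path.module}/out.zip")
}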

Another option is using an external data source.

For example, given a filename in the variable deployment_package, generate the base64 hash with the following:

data "external" "deployment_package" {
  program = ["/bin/bash", "-c", <<EOS
#!/bin/bash
set -e
SHA=$(openssl dgst -sha256 ${var.deployment_package} | cut -d' ' -f2 | base64)
jq -n --arg sha "$SHA" '{"filebase64sha256": $sha }'
EOS
  ]
}

and use it as such:

source_code_hash = data.external.deployment_package.result.filebase64sha256

which should give you

+ source_code_hash = "ZjRkOTM4MzBlMDk4ODVkNWZmMDIyMTAwMmNkMDhmMTJhYTUxMDUzZmIzOThkMmE4ODQyOTc2MjcwNThmZmE3Nwo="

+1 this issue, it's causing us much pain as we intentionally want to inline larger files into the Terraform configuration.

I see that https://github.com/hashicorp/terraform/pull/20906 has been merged over a year ago, but the symptom described above still persists.

Can the limit for gRPC transfers be increased across the project so that downstream services which can accept such payloads work properly, without workarounds?

Still happening with Terraform 0.12.24. Any workaround to fix the gRPC limit error?

This is still happening with Terraform 0.13.5 when using body with API Gateway (v2), on version 3.14.1 of the AWS provider.

To add more clarity, I'm using the file function in my case:

body = file(var.body)

The file in question is about 1.5MB in size.

If I remove the body declaration, Terraform runs successfully.

Update

I have used jq to compress the body down to ~500KB, and there was no error. It looks like the threshold might be lower than 4MB; 1MB, perhaps?
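If the body is JSON, a similar size reduction can also be done inside Terraform itself, since jsonencode produces compact output. A minimal sketch, assuming an HTTP API and that var.body points at an OpenAPI JSON file (the resource name is a placeholder):

resource "aws_apigatewayv2_api" "example" {
  name          = "example"
  protocol_type = "HTTP"

  # Round-tripping through jsondecode/jsonencode strips whitespace,
  # much like the external jq step described above.
  body = jsonencode(jsondecode(file(var.body)))
}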

I still have this issue with
Terraform v0.12.29
provider.archive v2.0.0
provider.aws v3.15.0
provider.template v2.2.0

Need filebase64 to support files > 4MB, because using it in combination with archive_file is the only way to make it idempotent.
Using a local_file in between breaks that....

data "archive_file" "this" {
  type        = "zip"
  output_path = "${path.module}/test.zip"

  source {
    filename = "test.crt"
    content  = file("${path.module}/archive/test.crt")
  }

  source {
    filename = "binary-file"
    content  = filebase64("${path.module}/archive/binary-file")
  }

  source {
    filename = "config.yml"
    content  = data.template_file.this.rendered
  }
}

I also have this issue trying to deploy a Rust function to IBM Cloud. Similarly to @atamgp, I have a data "archive_file" which fails with

grpc: received message larger than max (11484267 vs. 4194304)

But even if this succeeded (or the .zip file is created manually), the resource "ibm_function_action" would still fail with

grpc: received message larger than max (7074738 vs. 4194304)
Terraform v0.14.3
+ provider registry.terraform.io/hashicorp/archive v2.0.0
+ provider registry.terraform.io/hashicorp/local v2.0.0
+ provider registry.terraform.io/ibm-cloud/ibm v1.12.0

Faced the same issue with a Kubernetes config map

resource "kubernetes_config_map" "nginx" {
  metadata {
    name      = "geoip"
    namespace = "ingress"
  }

  binary_data = {
    "GeoLite2-Country.mmdb" = filebase64("${path.module}/config/GeoLite2-Country.mmdb")
  }
}
Acquiring state lock. This may take a few moments...

Error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5248767 vs. 4194304)
Terraform v0.14.4
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.3

I've encountered the same issue - it looks like there's a limitation on how many characters are in the resource code.

Using a file uploaded to a bucket (without compressing it) fixed my issue - I'm assuming that what helped is the fact that .body from S3 is usually a stream, as opposed to .rendered (which I was using before), which generates more characters in the resource source.

This is still happening with Terraform 0.13.5 when using body with API Gateway (v2), on version 3.14.1 of the AWS provider.

To add more clarity, I'm using the file function in my case:

body = file(var.body)

The file in question is about 1.5MB in size.

If I remove the body declaration, Terraform runs successfully.

Update

I have used jq to compress the body down to ~500KB, and there was no error. It looks like the threshold might be lower than 4MB; 1MB, perhaps?

@finferflu - have found the same thing; we were running into this with a 1.5MB OpenAPI JSON file. I was under the impression that it was not the actual file handle on the JSON that was causing this, but that the "body" of the REST API now contains it, which is then included in the state - and there's probably a lot of escape characters and other items in the state - so the state file exceeds 4MB. To avoid a local file for the swagger, we uploaded it to S3 and used an S3 data object in TF, and the same problem occurred - so a strong indicator to support this.

Still getting this issue with v0.15.4 and Terraform Cloud. We imported some infrastructure while using Terraform Cloud and then tried a plan, but cannot get the state file out:

╷
│ Error: Plugin error
│
│ with okta_group.user_type_non_service_accounts,
│ on groups.tf line 174, in resource "okta_group" "user_type_non_service_accounts":
│ 174: resource "okta_group" "user_type_non_service_accounts" {
│
│ The plugin returned an unexpected error from plugin.(*GRPCProvider).UpgradeResourceState: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (6280527 vs. 4194304)

My file is around 2.4 MB and I am facing this issue even today.

resource "local_file" "parse-template" {
  content =  templatefile(local.template-name, {
    var1 = value1
    var2 = value2
  }) 
  filename = "${local.script-name}"
}

Any workarounds for this, please?

We ran into this error when using Swagger JSON files and API Gateway.
We temporarily fixed this issue by compressing the Swagger JSON file to shrink it, which was sufficient; the Swagger size went from 1.4MB to 950KB.

It's not a real workaround, but maybe it helps somebody who is also close to the limit.
Strangely, the error kept persisting even though we didn't use any local.template_file or local.file data/resource (we used the templatefile function instead).

Can this get more attention please?
