Terraform v0.11.7
+ provider.aws v1.18.0
Creating a Lambda function with a 16 MB file on a slow connection should work, albeit slowly. Instead, it quickly times out.
module.lambda.aws_lambda_function.lambda: Creating...
arn: "" => "<computed>"
description: "" => "..."
environment.#: "" => "1"
environment.0.variables.%: "" => "2"
environment.0.variables.DOMAINS: "" => "..."
environment.0.variables.EMAIL_ADDRESS: "" => "..."
filename: "" => "large-file.zip"
function_name: "" => "..."
handler: "" => "main.lambda_handler"
invoke_arn: "" => "<computed>"
last_modified: "" => "<computed>"
memory_size: "" => "128"
publish: "" => "false"
qualified_arn: "" => "<computed>"
reserved_concurrent_executions: "" => "0"
role: "" => "..."
runtime: "" => "python3.6"
source_code_hash: "" => "<computed>"
source_code_size: "" => "<computed>"
timeout: "" => "300"
tracing_config.#: "" => "<computed>"
version: "" => "<computed>"
module.lambda.aws_lambda_function.lambda: Still creating... (10s elapsed)
module.lambda.aws_lambda_function.lambda: Still creating... (20s elapsed)
module.lambda.aws_lambda_function.lambda: Still creating... (30s elapsed)
module.lambda.aws_lambda_function.lambda: Still creating... (40s elapsed)
module.lambda.aws_lambda_function.lambda: Still creating... (50s elapsed)
module.lambda.aws_lambda_function.lambda: Still creating... (1m0s elapsed)
module.lambda.aws_lambda_function.lambda: Still creating... (1m10s elapsed)
module.lambda.aws_lambda_function.lambda: Still creating... (1m20s elapsed)
module.lambda.aws_lambda_function.lambda: Still creating... (1m30s elapsed)
Error: Error applying plan:
1 error(s) occurred:
* module.lambda.aws_lambda_function.lambda: 1 error(s) occurred:
* aws_lambda_function.lambda: Error creating Lambda function: timeout while waiting for state to become 'success' (timeout: 1m0s)
Use aws_lambda_function with filename pointing to a large file, and try to apply it.
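For reference, a minimal configuration along these lines reproduces it (the names and the IAM role reference are placeholders):

resource "aws_lambda_function" "lambda" {
  function_name = "my-function"
  filename      = "large-file.zip"               # ~16 MB deployment package, uploaded inline
  handler       = "main.lambda_handler"
  runtime       = "python3.6"
  role          = "${aws_iam_role.lambda.arn}"   # placeholder IAM role
  memory_size   = 128
  timeout       = 300
}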
A workaround is to first create the function with a small file and then modify it to use the big file, as per this comment in an unrelated issue.
I have the same issue, but with binaries 2.9 MB in size:
laptop-A0126:project jack$ ls -lh artifacts/
total 12288
-rw-r--r-- 1 jack staff 2.9M May 28 22:23 preprocessing.zip
-rw-r--r-- 1 jack staff 2.9M May 28 22:23 postprocessing.zip
module.streams.aws_lambda_function.lambda_l2a.0: Still creating... (1m30s elapsed)
module.streams.aws_lambda_function.lambda_l2a.1: Still creating... (1m30s elapsed)
module.streams.aws_lambda_function.lambda_l2a.1: Still creating... (1m40s elapsed)
module.streams.aws_lambda_function.lambda_l2a.1: Still creating... (1m50s elapsed)
module.streams.aws_lambda_function.lambda_l2a.1: Still creating... (2m0s elapsed)
module.streams.aws_lambda_function.lambda_l2a.1: Still creating... (2m10s elapsed)
module.streams.aws_lambda_function.lambda_l2a[1]: Creation complete after 2m19s (ID: prod-data-l2a)
Error: Error applying plan:
1 error(s) occurred:
* module.streams.aws_lambda_function.lambda_l2a[0]: 1 error(s) occurred:
* aws_lambda_function.lambda_l2a.0: Error creating Lambda function: timeout while waiting for state to become 'success' (timeout: 1m0s)
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
__Lambda Runtime__: go1.x
+1 The same problem here with a 4.6 MB file.
+1
+1 with an 11.3 MB file
+1
Same issue: I run into Error creating Lambda function: timeout while waiting for state to become 'success' (timeout: 1m0s), but when I look at the functions after terraform apply, the function exists and has been created. Also, my timeout = 300 seems to be ignored, because Terraform reports timeout: 1m0s when it should be five minutes.
+1. I'm in a place with a slower Internet connection, and can't get a Lambda function to upload before it times out 🙁
+1, 2.9 MB file
@mitchellh Please help assign someone to this, thank you very much.
+1
Same here, but creating the function failed with a different error message:
InvalidParameterValueException - Could not unzip uploaded file. Please check your file, then try to upload again.
I checked, and my zip file is only 800 KB. It would be strange if the issue were caused by a slow network, because I can create a function successfully with a 300 KB zip file. I can also upload files larger than 100 MB to storage services like S3 and Google Drive, or upload videos to YouTube, without any problem.
What is the exact problem here? I'm very frustrated with this issue because there is no reference or related documentation for it. I am using the AWS SDK for Node.js to deploy to Lambda.
The only work around is to first upload .zip to S3 and get lambda to use that
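On the Terraform side, that looks roughly like this (bucket and key names are placeholders); the upload goes to S3 and the provider only passes the object location to Lambda:

resource "aws_s3_bucket_object" "lambda_package" {
  bucket = "my-artifacts-bucket"       # placeholder bucket
  key    = "lambda/large-file.zip"
  source = "large-file.zip"
}

resource "aws_lambda_function" "lambda" {
  function_name = "my-function"
  s3_bucket     = "${aws_s3_bucket_object.lambda_package.bucket}"
  s3_key        = "${aws_s3_bucket_object.lambda_package.key}"
  handler       = "main.lambda_handler"
  runtime       = "python3.6"
  role          = "${aws_iam_role.lambda.arn}"   # placeholder IAM role
}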
The only work around is to first upload .zip to S3 and get lambda to use that
Yup, I read the same solution elsewhere, but I'm not happy with it, because the zip file is under the Lambda maximum size of 50 MB, right? I don't think it's a proper solution if we have to set up S3 as well just to upload a very small zip file. In other words, why should we need S3 for the lambda::createFunction Code parameter if the zip file is under 50 MB?
Do you have any other ideas?
Also, I just tried the S3 approach, but I get the same error:
InvalidParameterValueException - Could not unzip uploaded file. Please check your file, then try to upload again.
The details of my attempts are below:
// Attempt 1: inline upload via Code.ZipFile
params = {
Description: "My Function",
FunctionName: "MyFunction",
Handler: "myfunction.handler",
MemorySize: 768,
Publish: true,
Role: my_iam_lambda_role_arn,
Runtime: "nodejs8.10",
Timeout: 30,
Code: {ZipFile: fs.readFileSync(my_zipfile_path)}
}
// Attempt 2: reference a zip already uploaded to S3
params = {
Description: "My Function",
FunctionName: "MyFunction",
Handler: "myfunction.handler",
MemorySize: 768,
Publish: true,
Role: my_iam_lambda_role_arn,
Runtime: "nodejs8.10",
Timeout: 30,
Code: { S3Bucket: my_s3_bucket_name, S3Key: my_zipfile_on_s3_bucket }
}
Other info I can share, in case it helps: my zip file contents are:
myfunction/
|_ myfunction.js
|_ node_modules/
   |_ sendmail packages (from npm install sendmail)
I use Go and had the corrupt zip file issue until I used build-lambda-go.exe, after which Lambda was able to expand the archive.
Also, an HTTP proxy resulted in an 80-byte file being uploaded while Terraform reported success, which was wrong again.
I've split my code into smaller programs and now have a few smaller zip files, which works OK for now.
Pull request submitted to allow a configurable timeout (defaults to 10 minutes) for these slower uploads: #6409
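For anyone who needs more headroom than that once it ships, the intent is that the create timeout becomes configurable via the standard timeouts block, roughly like this (a sketch; check #6409 and the provider docs for the exact syntax):

resource "aws_lambda_function" "lambda" {
  function_name = "my-function"
  filename      = "large-file.zip"
  handler       = "main.lambda_handler"
  runtime       = "python3.6"
  role          = "${aws_iam_role.lambda.arn}"   # placeholder IAM role

  timeouts {
    create = "30m"   # assumed configurable per #6409; the new default is 10m
  }
}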
Thank you @bflad. This is hugely valuable when dealing with large deployment packages or slow Internet connections.
Yay :+1:
The fix for this has been merged and will be released with version 1.43.1 of the AWS provider shortly (likely within the next hour or so). 😄 Sorry for the previous troubles and happy to help!
This has been released in version 1.43.1 of the AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
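For anyone upgrading, pinning the provider version in the configuration is enough to pick up the fix (the constraint and region are just examples):

provider "aws" {
  version = ">= 1.43.1"
  region  = "us-east-1"   # example region
}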
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!