Description:
Running `aws cloudformation package` has slowed down significantly as the package has grown. We have around 10 Lambda functions and a number of other resources that add up to a 54 MB package. We're using the go1.x runtime, which means we have to package an individual binary for each Lambda function (not sure if that's relevant).
Steps to reproduce the issue:
`aws cloudformation package`
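(A complete invocation looks something like `aws cloudformation package --template-file template.yml --s3-bucket <your-bucket> --output-template-file packaged.yml`; the template and bucket names here are placeholders.)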
Observed result:
The CLI pauses for ~30 seconds after logging this output line:
```
Uploading to jleroux/9168faaec90f8d4ac935401e91b01f3f 54270852 / 54270852.0 (100.00%)
```
Expected result:
The CLI does not hang after uploading the package to S3.
@jabalsad Would it be possible for you to provide debug logs from the command you are running? You can do so by appending `--debug` to the command you ran. It is a little hard to tell what is causing the CLI to take so long, but debug logs should give us a lot more clarity about what is taking an unexpectedly long time.
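For example: `aws cloudformation package <your usual arguments> --debug 2> package-debug.log` (the arguments are placeholders). The debug output is written to stderr, so redirecting it to a file makes it easy to share.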
I am experiencing the same issue. On average there is a ~6-second delay after `File with same data already exists at XXXX` before the next line of debug logs appears. This adds up to a significant delay for templates with more functions. I could break the template into smaller templates, but that would make management and the release process much more tedious.
```
<path>
host:s3.amazonaws.com
x-amz-content-sha256:<hash>
x-amz-date:20180613T162428Z
host;x-amz-content-sha256;x-amz-date
<date>
2018-06-13 11:24:28,227 - MainThread - botocore.auth - DEBUG - StringToSign:
AWS4-HMAC-SHA256
20180613T162428Z
20180613/us-east-1/s3/aws4_request
216c0c54fb6c8c3ac4114a3521e37a92d3fcfe397203904ceef8fe4fdc91db2a
2018-06-13 11:24:28,227 - MainThread - botocore.auth - DEBUG - Signature:
<signature>
2018-06-13 11:24:28,227 - MainThread - botocore.hooks - DEBUG - Event request-created.s3.HeadObject: calling handler <function enable_upload_callbacks at 0x110f00cf8>
2018-06-13 11:24:28,227 - MainThread - botocore.endpoint - DEBUG - Sending http request: <PreparedRequest [HEAD]>
2018-06-13 11:24:28,276 - MainThread - botocore.vendored.requests.packages.urllib3.connectionpool - DEBUG - "HEAD /<my path> HTTP/1.1" 200 0
2018-06-13 11:24:28,276 - MainThread - botocore.parsers - DEBUG - Response headers: {'content-length': '15564249', 'x-amz-id-2': '<id>', 'accept-ranges': 'bytes', 'server': 'AmazonS3', 'last-modified': 'Wed, 13 Jun 2018 15:22:04 GMT', 'x-amz-request-id': 'F18FB7A2076BC7EF', 'etag': '"<tag>"', 'date': 'Wed, 13 Jun 2018 16:24:29 GMT', 'x-amz-server-side-encryption': 'AES256', 'content-type': 'binary/octet-stream'}
2018-06-13 11:24:28,277 - MainThread - botocore.parsers - DEBUG - Response body:
2018-06-13 11:24:28,277 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.HeadObject: calling handler <botocore.retryhandler.RetryHandler object at 0x111956a90>
2018-06-13 11:24:28,277 - MainThread - botocore.retryhandler - DEBUG - No retry needed.
2018-06-13 11:24:28,277 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x111956ad0>>
2018-06-13 11:24:28,277 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.HeadObject: calling handler <function enhance_error_msg at 0x1110e3758>
2018-06-13 11:24:28,277 - MainThread - awscli.customizations.s3uploader - DEBUG - File with same data is already exists at 2b81ffe4a82434c6b02be1f7e94d1823. Skipping upload
<Major Delay>
2018-06-13 11:24:32,221 - MainThread - botocore.hooks - DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function sse_md5 at 0x1109ee6e0>
2018-06-13 11:24:32,221 - MainThread - botocore.hooks - DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function validate_bucket_name at 0x1109ee668>
2018-06-13 11:24:32,221 - MainThread - botocore.hooks - DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <bound method S3RegionRedirector.redirect_from_cache of <botocore.utils.S3RegionRedirector object at 0x111956ad0>>
2018-06-13 11:24:32,221 - MainThread - botocore.hooks - DEBUG - Event before-parameter-build.s3.HeadObject: calling handler <function generate_idempotent_uuid at 0x1109ee320>
2018-06-13 11:24:32,221 - MainThread - botocore.hooks - DEBUG - Event before-call.s3.HeadObject: calling handler <function add_expect_header at 0x1109eeaa0>
2018-06-13 11:24:32,221 - MainThread - botocore.hooks - DEBUG - Event before-call.s3.HeadObject: calling handler <bound method S3RegionRedirector.set_request_url of <botocore.utils.S3RegionRedirector object at 0x111956ad0>>
2018-06-13 11:24:32,221 - MainThread - botocore.endpoint - DEBUG - Making request for OperationModel(name=HeadObject) (verify_ssl=True) with params: {'body': '', 'url': u'<my url>', 'headers': {'User-Agent': 'aws-cli/1.12.2 Python/2.7.12 Darwin/17.6.0 botocore/1.8.42'}, 'context': {'auth_type': None, 'client_region': u'us-east-1', 'signing': {'bucket': u'<bucket>'}, 'has_streaming_input': False, 'client_config': <botocore.config.Config object at 0x111904ad0>}, 'query_string': {}, 'url_path': u'/<my path>', 'method': u'HEAD'}
2018-06-13 11:24:32,221 - MainThread - botocore.hooks - DEBUG - Event request-created.s3.HeadObject: calling handler <function disable_upload_callbacks at 0x110f00c80>
2018-06-13 11:24:32,221 - MainThread - botocore.hooks - DEBUG - Event request-created.s3.HeadObject: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x111904a50>>
2018-06-13 11:24:32,221 - MainThread - botocore.hooks - DEBUG - Event choose-signer.s3.HeadObject: calling handler <function set_operation_specific_signer at 0x1109ee230>
2018-06-13 11:24:32,221 - MainThread - botocore.hooks - DEBUG - Event before-sign.s3.HeadObject: calling handler <function fix_s3_host at 0x110866230>
2018-06-13 11:24:32,222 - MainThread - botocore.auth - DEBUG - Calculating signature using v4 auth.
2018-06-13 11:24:32,222 - MainThread - botocore.auth - DEBUG - CanonicalRequest:
HEAD
<path 2>
```
Note: the logs are from `aws-cli/1.12.2 Python/2.7.12 Darwin/17.6.0 botocore/1.8.42`, but I immediately upgraded to `aws-cli/1.15.37 Python/2.7.12 Darwin/17.6.0 botocore/1.10.37` and the issue still occurs.
I am experiencing the same problem. I guess the cause is this: the package command first creates a zip package for each Lambda function defined in template.yml, then calculates its hash and checks whether S3 already has it. Checking S3 is quick; no problem there. But packaging every function separately is wasteful. If every function had a different CodeUri defined it would make sense, but in my case all the functions reside in the same directory. My Globals section has `CodeUri: ./`, so all functions should actually receive the same .zip file from S3. However, `cloudformation package` zips the directory 20 times for 20 different functions.
A possible solution: if a function's CodeUri is the same as a previous function's, the package file should be cached rather than recreated (a rough sketch of what I mean is below).
Maybe the problem is my approach of using the same zip file for every function; that's open to debate. I know serverless proponents recommend separating functions, even into separate repos. However, in my simple scenario I have one project with many helper libraries and 20 different functions, all in the same repo.
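To make the idea concrete, here is a rough sketch in Python (only because the CLI itself is Python). This is not actual aws-cli code; the function names and structure are made up for illustration:
```
import hashlib
import os
import zipfile


def zip_directory(src_dir, out_path):
    """Zip the contents of src_dir into out_path and return out_path."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, src_dir))
    return out_path


def package_functions(code_uris, work_dir):
    """Zip each distinct CodeUri directory only once and reuse the artifact."""
    cache = {}
    for uri in code_uris:
        key = os.path.abspath(uri)
        if key not in cache:
            name = hashlib.md5(key.encode("utf-8")).hexdigest() + ".zip"
            cache[key] = zip_directory(key, os.path.join(work_dir, name))
    # Every function whose CodeUri resolves to the same directory gets the same zip.
    return {uri: cache[os.path.abspath(uri)] for uri in code_uris}
```
With `CodeUri: ./` in Globals, all 20 functions would resolve to the same directory, so it would be zipped, hashed, and HEAD-checked once instead of 20 times.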
I ran into the same problem. I had 7 Go Lambda functions, each over 80 MB, with all the binaries in a single directory. Now I build each binary inside its function's own folder, point Handler at the binary name, and point CodeUri at that folder.
It works like a charm. My binaries in Lambda are now back to ~5 MB.
makefile:
```
# Build each handler's binary into its own directory (recipe lines must start with a tab).
build:
	@for dir in `ls handler`; do \
		GOOS=linux go build -o handler/$$dir/main gitlab.com/PATH/handler/$$dir; \
	done
```
template.yml:
```
Handler: main
CodeUri: handler/FUNCTION_DIR_NAME
```
add to .gitignore:
```
handler/**/main
```
You don't want to version-control your binaries.
Awesome, thanks!
@jabalsad - Thanks again for reporting this issue. It appears this is no longer an issue, so I'm closing it due to inactivity. Let us know if there is any new feedback.
cc: @Tornquist @baranbaygan @jimbo5644535