The announcement [1] and the associated blog post [2] say that sam deploy now automatically creates the bucket for deploying Lambdas. However, even with the latest version of SAM CLI, we get the message:

```
S3 Bucket not specified, use --s3-bucket to specify a bucket name or run sam deploy --guided
```
Running sam deploy with a template that includes an AWS::Serverless::Function resource, e.g.:

```yaml
ECSClusterCapacityProviderFunction:
  Type: AWS::Serverless::Function
  Properties:
    Runtime: go1.x
    CodeUri: ./cfn/ecs-cap-provider
    Handler: main
```
```
Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics

Deploying with following values
===============================
Stack name           : <redacted>
Region               : None
Confirm changeset    : False
Deployment s3 bucket : None
Capabilities         : ["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"]
Parameter overrides  : {<redacted>}

Initiating deployment
=====================
Property Location of DBSecretRotationApp resource is not a URL
Unable to export
Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 265, in export
    self.do_export(resource_id, resource_dict, parent_dir)
  File "/usr/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 281, in do_export
    uploaded_url = upload_local_artifacts(resource_id, resource_dict, self.PROPERTY_NAME, parent_dir, self.uploader)
  File "/usr/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 151, in upload_local_artifacts
    return zip_and_upload(local_path, uploader)
  File "/usr/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 169, in zip_and_upload
    return uploader.upload_with_dedup(zip_file)
  File "/usr/lib/python3.7/site-packages/samcli/lib/package/s3_uploader.py", line 127, in upload_with_dedup
    return self.upload(file_name, remote_path)
  File "/usr/lib/python3.7/site-packages/samcli/lib/package/s3_uploader.py", line 76, in upload
    if not self.force_upload and self.file_exists(remote_path):
  File "/usr/lib/python3.7/site-packages/samcli/lib/package/s3_uploader.py", line 140, in file_exists
    raise BucketNotSpecifiedError()
samcli.commands.package.exceptions.BucketNotSpecifiedError:
S3 Bucket not specified, use --s3-bucket to specify a bucket name or run sam deploy --guided
Sending Telemetry: {'metrics': [{'commandRun': {'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam deploy', 'duration': 2934, 'exitReason': 'ExportFailedError', 'exitCode': 1, 'requestId': '211449fb-48cf-4575-b535-31cf05947392', 'installationId': 'ac292a82-3c83-45d1-8166-de4a3afc3ac1', 'sessionId': '2f748e2a-7f6f-4cac-ae4f-f1da4cde5e72', 'executionEnvironment': 'CLI', 'pyversion': '3.7.5', 'samcliVersion': '0.39.0'}}]}
Telemetry response: 200
Error: Unable to upload artifact ./cfn/ecs-cap-provider referenced by CodeUri parameter of ECSClusterCapacityProviderFunction resource.
S3 Bucket not specified, use --s3-bucket to specify a bucket name or run sam deploy --guided
```
sam deploy should automatically create a bucket for deployment, as per the announcement.
sam --version: 0.39.0

[1] https://aws.amazon.com/about-aws/whats-new/2019/11/aws-sam-cli-simplifies-deploying-serverless-applications-with-single-command-deploy/
[2] https://aws.amazon.com/blogs/compute/a-simpler-deployment-experience-with-aws-sam-cli/
Ah!
Running sam deploy --guided first (this will create the bucket for you), and then making subsequent deploys just sam deploy, should work.
Is there a way to do that without --guided? We deploy all resources through CI/CD, and running something manually out of band leaves much to be desired..
@sriram-mv To reiterate what @dinvlad asked: is there a way to do this without --guided?
You can package it and deploy it:

```
aws cloudformation package --template-file template.yaml --output-template-file packaged-template.yaml --s3-bucket S3BUCKET
aws cloudformation deploy --template-file packaged-template.yaml --stack-name STACKNAME --capabilities CAPABILITY_IAM
```
@huy9997 that's what we do currently, but it requires creating an S3 bucket first and granting proper access to it from the deployment role. I was wondering if we could avoid that altogether.
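If you stay on this two-step flow, the bucket-creation step can at least be made idempotent so it never has to happen manually out of band. A minimal Python sketch; the naming scheme and helper names are invented for illustration (nothing SAM provides), and the S3 client is injected so the logic can be exercised without AWS credentials:

```python
# Idempotent bucket bootstrap a CI job could run before "aws cloudformation
# package". The naming scheme is invented for illustration; the S3 client is
# injected (e.g. boto3.client("s3")) so the naming logic runs without AWS.
def artifact_bucket_name(stack_name, region):
    """Derive a deterministic, per-stack artifact bucket name."""
    return "sam-artifacts-{}-{}".format(stack_name, region).lower()

def ensure_bucket(stack_name, region, s3_client=None):
    """Create the artifact bucket if it is missing; return its name."""
    name = artifact_bucket_name(stack_name, region)
    if s3_client is not None:
        try:
            s3_client.head_bucket(Bucket=name)
        except Exception:
            # Bucket absent (or inaccessible): try to create it. Outside
            # us-east-1 you would also pass CreateBucketConfiguration.
            s3_client.create_bucket(Bucket=name)
    return name

print(ensure_bucket("MyApp", "us-east-1"))  # sam-artifacts-myapp-us-east-1
```

Granting the deployment role access to the bucket still has to be handled separately, which is the part this issue is really about.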
It should be connected to an S3 bucket? I added one in my package script.
> It should be connected to an S3 bucket
Yep, the one specified via --s3-bucket S3BUCKET in the first command.
What's this for? I already linked my S3 bucket?
Yes, if you used this command in a package script, then you specified a bucket for --s3-bucket, from what I understand.
I'm facing the same issue, but in my case, after I ran the deploy for the first time, if you delete your bucket, it won't be created again. (I did this because I wanted to change to a bucket with better naming instead of the autogenerated one.)
Edit: after deleting the bucket, sam deploy still prints:

```
Looking for resources needed for deployment: Found!
Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-XXXXXX
A different default S3 bucket can be set in samconfig.toml
```
This is not true: if you change the default S3 bucket in samconfig.toml, it is neither created nor used.
It seems pretty clear that AWS are looking to avoid fully automated deployment through SAM CLI, which seems reasonable enough
I do have an S3 bucket attached.
OK, I've found the problem: if you delete the bucket and want it to be created again, you have to delete your aws-sam-cli-managed-default stack before running sam deploy again.
I think some people are completely missing the point / issue @dinvlad is trying to convey. In order to package and deploy an application we have to first _manually_ create a bucket. It would be nice if we could define and reference a bucket in the SAM template or have the package process automatically create a bucket using the app name (or something similar). In the meantime I am using a workaround that works for my use case so maybe it will help others. Note that I am using TypeScript / Node.js so you may need to tweak.
In my package.json I have the following scripts defined:
```json
"config": {
  "region": "us-east-1",
  "s3BucketName": "insert destination bucket name",
  "template": "./template.yaml",
  "outputTemplate": "./template.packaged.yaml",
  "stackName": "insert destination stack name"
},
"scripts": {
  "sam-create-bucket": "cross-var aws s3 mb s3://$npm_package_config_s3BucketName --region $npm_package_config_region",
  "sam-package-src": "cross-var sam package --template $npm_package_config_template --s3-bucket $npm_package_config_s3BucketName --output-template-file $npm_package_config_outputTemplate --region $npm_package_config_region",
  "sam-deploy-src": "cross-var sam deploy --template-file $npm_package_config_outputTemplate --stack-name $npm_package_config_stackName --capabilities CAPABILITY_IAM --region $npm_package_config_region",
  "sam-deploy": "npm run build && npm run sam-create-bucket && npm run sam-package-src && npm run sam-deploy-src",
  "transpile": "tsc",
  "build": "npm run transpile && webpack-cli",
  ...
}
```
A few notes:
1) If you don't use cross-var, simply remove the config section and hard-code the values
2) If you're not using TypeScript, you can remove the transpile step
3) Remove webpack as well if you're not using it
Once you have this you can simply run npm run sam-deploy and it will transpile the TypeScript to JavaScript, run webpack, create the destination s3 bucket, package the code and deploy the app.
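For projects that aren't on Node, here is a rough Python equivalent of those npm scripts. It only assembles the three commands; all values are placeholders mirroring the package.json "config" block, and actual execution is left commented out since it needs AWS credentials:

```python
# Assemble the create-bucket / package / deploy commands, mirroring the
# package.json "config" block above. All values are placeholders.
CONFIG = {
    "region": "us-east-1",
    "bucket": "my-deploy-bucket",
    "template": "template.yaml",
    "packaged": "template.packaged.yaml",
    "stack": "my-stack",
}

def deploy_commands(cfg):
    """Return the create-bucket, package, and deploy commands, in order."""
    return [
        "aws s3 mb s3://{bucket} --region {region}".format(**cfg),
        ("sam package --template {template} --s3-bucket {bucket}"
         " --output-template-file {packaged} --region {region}").format(**cfg),
        ("sam deploy --template-file {packaged} --stack-name {stack}"
         " --capabilities CAPABILITY_IAM --region {region}").format(**cfg),
    ]

for cmd in deploy_commands(CONFIG):
    print(cmd)
    # In a real pipeline: subprocess.run(shlex.split(cmd), check=True)
```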
I'm facing the same situation as described by @dinvlad: I'd like for SAM to automatically create a bucket, but without any interactive prompts. @sriram-mv any chance this issue can be reopened?
What's missing will be fixed by exposing the bootstrap command:
As it says in the source at aws-sam-cli/samcli/cli/command.py:
https://github.com/awslabs/aws-sam-cli/blob/79fe5cb293c251d2fc52ed2087f9dc9352e78fa1/samcli/cli/command.py#L22-L26
"We intentionally do not expose the `bootstrap` command for now. We might open it up later"
Well, I found a workaround!
The trick is to use SAM CLI as a library:

```python
from samcli.lib.bootstrap.bootstrap import manage_stack

bucket_name = manage_stack(your_profile, your_region)
```
But personally, I'd rather not work around a disabled feature.
This is also a problem when doing automated deployment using GitHub actions
It's ridiculous it has been closed, @sriram-mv has completely missed the point of the issue.
I am trying to add my lambda script file (index.js) in CodeUri via a local path, i.e. `CodeUri: .`, but it still asks me to give an S3 bucket path. Is there any way to use a local path without involving S3?
Thanks
> I am trying to add my lambda script file (index.js) in CodeUri via a local path, i.e. `CodeUri: .`, but it still asks me to give an S3 bucket path. Is there any way to use a local path without involving S3?
This is a little off topic for what this issue is about, but here is some information that will hopefully help you.
Let's say your project structure is:

```
|- src
|  |- functions
|  |  |- my-function
|  |  |  |- index.js
|  |  |- my-other-function
|  |  |  |- index.js
|  |- layers
|  |  |- whatever
|- template.yaml
```
In your SAM template you can have the following:

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: src/functions/my-function/
    Handler: index.handler
    ...
MyOtherFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: src/functions/my-other-function/
    Handler: index.handler
    ...
```
When you run the sam package command, the resulting output template (template.packaged.yaml as an example) will have the S3 bucket information in there for you. So if you had the following sam package command:

```
sam package --template ./template.yaml --s3-bucket my-bucket --output-template-file ./template.packaged.yaml --region us-east-1
```
The resulting output template should be something like this:

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: s3://my-bucket/361917919f91fe7efbc7d5d7
    Handler: index.handler
    ...
MyOtherFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: s3://my-bucket/a88b8f9f0f0a0a88770543a43a3
    Handler: index.handler
    ...
```
You don't have to do anything with that other than pass it to the "aws cloudformation deploy ..." command and it will do the rest.
Had the same issue, as I had deleted the bucket aws-sam-cli-managed-default-samclisourcebucket-xxxxxxx.
Deleting the CloudFormation stack aws-sam-cli-managed-default and running sam deploy -g worked fine without errors (it recreated the CF stack with the bucket).
I could fix this issue by adding the flag --resolve-s3 to sam deploy
sam version: "1.9.0"
seems this issue is fixed, i can deploy through ci with sam deploy --no-confirm-changeset
version: 1.12.0
> seems this issue is fixed, i can deploy through ci with sam deploy --no-confirm-changeset
> version: 1.12.0
Just out of curiosity, how does that help with what @dinvlad asked about and others that deploy all resources through CI/CD? When you have a large project that requires an S3 bucket (your YAML is too big as an example) and you have different stages (development, staging, and production) then it would be nice to have the entire process automated. Right now it's not since it's required that we manually create the S3 bucket first. That is the entire point of this issue.
The devs, and many others, are missing this point, and I'm not sure how it can be made clearer. This issue should never have been closed. They dropped the ball on this, and it's a shame that they closed it rather than actually fixing it.
Please check my answer. With the --resolve-s3 flag, sam searches for an S3 bucket, and if none is found, a new one is created automatically.
> --resolve-s3

Using the --resolve-s3 flag, personally I get:

```
sam deploy --template-file template.yml --stack-name $PROJECT_NAME --capabilities CAPABILITY_IAM --no-confirm-changeset --resolve-s3
Error: Cannot use both --resolve-s3 and --s3-bucket parameters in non-guided deployments. Please use only one or use the --guided option for a guided deployment
```
EDIT: OK, I found out why: the s3_bucket property was set in the samconfig.toml file.
It seems to work when I either remove the related line or delete the file completely.
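To tie the --resolve-s3 reports together, a fully non-interactive CI invocation could be assembled like this. This is only a sketch with placeholder stack and template names, and it assumes a SAM CLI version recent enough to support --resolve-s3:

```python
# Build a non-interactive "sam deploy" argument list for CI. --resolve-s3 is
# mutually exclusive with --s3-bucket (and with s3_bucket in samconfig.toml),
# so neither appears here. Stack and template names are placeholders.
def deploy_argv(stack_name, template="template.yml"):
    return [
        "sam", "deploy",
        "--template-file", template,
        "--stack-name", stack_name,
        "--capabilities", "CAPABILITY_IAM",
        "--no-confirm-changeset",
        "--resolve-s3",
    ]

print(" ".join(deploy_argv("my-project")))
# Pass the list to subprocess.run(...) in a real pipeline.
```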