Often when building packages, there is a need for a build step before code can be zipped, e.g. to gather dependencies that need to be packaged. Currently that means the build has to happen before `aws cloudformation package` is run. It would be nice if building could be done by the `package` command, e.g. via some hook.
My initial idea was:

```
aws cloudformation package --build-with ${command}
```

`${command}` is run in that directory. This isn't very flexible, but it allows you to run `make`, `pip install -r requirements.txt -t .`, or `./build.sh` without leaving the files they create on disk. A big downside is that builds can't reuse the output of a previous build.
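To make the proposal concrete, usage might look like this (the `--build-with` flag is hypothetical, as are the bucket and file names; the other flags are existing `package` options):

```sh
#!/bin/sh
# build.sh -- hypothetical build step the proposed hook would run inside
# each code directory before zipping. Vendors Python dependencies next to
# the handler so they end up in the zip.
set -e
pip install -r requirements.txt -t .
```

```sh
# Under the proposal, the files build.sh creates would be zipped and then
# discarded rather than left on disk.
aws cloudformation package \
    --template-file template.yaml \
    --s3-bucket my-artifact-bucket \
    --output-template-file packaged.yaml \
    --build-with ./build.sh
```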
An alternative could be:

```
aws cloudformation package --build-hooks pre=${command01},post=${command02}
```

`${command01} /path/from/template` is run. It must output a path to a directory (if it outputs nothing, the original path is used). Then `${command02} /output/from/command01` is run. This is more flexible, and lets the user decide whether to reuse build artifacts, but it is harder to use.
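To sketch that contract (hook names, the `.build` cache directory, and the Python build step are all just illustrative):

```sh
#!/bin/sh
# pre-build.sh -- invoked as `pre-build.sh /path/from/template`.
# Builds into an out-of-source directory and prints its path on stdout,
# so later runs can reuse the previous build output.
set -e
src="$1"
out="${src%/}.build"
mkdir -p "$out"
cp -R "$src/." "$out/"
pip install -r "$out/requirements.txt" -t "$out" >&2  # keep stdout clean for the path
echo "$out"
```

```sh
#!/bin/sh
# post-build.sh -- invoked as `post-build.sh /output/from/pre-build`.
# Could clean up or record what was packaged; here it just logs.
echo "packaged contents of: $1"
```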
I'd be happy to work on implementing this (time permitting), once there is a decision on the best approach.
cc @sanathkr
My thoughts: what would running the build during the `aws cloudformation package` call actually buy us? The package command doesn't fundamentally change anything inside the directories it zips up, so there's not really any distinction between running a build script before running `package` vs. running it during. I'm not sure we can provide any added value by bundling in hooks, especially given templates which might contain multiple functions that each need their own distinct build steps. Overall I'm not inclined to include this feature, though if there's some big gain that I'm missing I can reconsider.
To take a step back, this is one potential solution to an existing, fundamental problem with `aws cloudformation package`. Currently, `aws cloudformation package` does three things:

1. Zips up the local paths referenced by the template.
2. Uploads those zip files to S3.
3. Rewrites the template so the references point at the uploaded objects.

It is a pain point that these steps cannot be split apart or modified. The zip step only takes paths relative to the template directory, disallowing out-of-source builds. The upload step can't be hooked into to index uploads.
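For example, an out-of-source build currently has to be staged back under the template directory before `package` will pick it up (the paths and bucket name here are made up):

```sh
#!/bin/sh
# Workaround for the relative-paths-only zip step: build elsewhere, then
# stage the output into a directory the template's CodeUri can reference.
set -e
rm -rf .staging && mkdir .staging
cp -R src/. .staging/
pip install -r src/requirements.txt -t .staging

# template.yaml must point its CodeUri at ./.staging for this to work.
aws cloudformation package \
    --template-file template.yaml \
    --s3-bucket my-artifact-bucket \
    --output-template-file packaged.yaml
```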
A fundamental question is: does the AWS CLI, along with SAM, intend to provide a fully capable tool for developing Lambda functions? Without some kind of capability that would allow me to, for example, run `pipenv install` during the call to `aws cloudformation package`, I am always going to need some separate, third-party tooling to wrap the calls to the AWS CLI, just in case I need to do a few extra things.
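In practice that wrapper tends to look something like this (a sketch; `pipenv lock -r` was the older way to export locked dependencies, and the file and bucket names are assumptions):

```sh
#!/bin/sh
# deploy.sh -- the kind of third-party wrapper the CLI currently forces.
# Everything before the final command is a build step `package` can't run.
set -e
pipenv lock -r > requirements.txt
pip install -r requirements.txt -t .
aws cloudformation package \
    --template-file template.yaml \
    --s3-bucket my-artifact-bucket \
    --output-template-file packaged.yaml
```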
Rather than having to write a script to do a generic thing like installing requirements, I would love to see this become a part of the package command itself. You can read the `Runtime` property and, if it's Python and a `requirements.txt` file exists, run `pip install -r requirements.txt -t .` before zipping. The same goes for Node.js with a `package.json` and an `npm install`; I'm not sure how other runtimes work.
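Sketched as shell logic, with file-existence checks standing in for the `Runtime` check (reading the actual property would require parsing the template):

```sh
#!/bin/sh
# Hypothetical per-function auto-build `package` could run before zipping.
set -e
if [ -f requirements.txt ]; then
    # Python runtime: vendor dependencies alongside the handler.
    pip install -r requirements.txt -t .
elif [ -f package.json ]; then
    # Node.js runtime: populate node_modules for the zip.
    npm install
fi
```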
As an addition, you could still allow `pre` and `post` scripts, for example to inject generic models that live in the local repo, or to inject a secret hash or a certificate/key that cannot be committed but needs to be in the Lambda function, etc.
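A `pre` script for that could be as simple as the following (the shared-models path and secret name are made up; `aws secretsmanager get-secret-value` is the real CLI call):

```sh
#!/bin/sh
# pre script: pull in shared models from the repo and fetch a key that
# must not be committed but has to ship inside the function package.
set -e
cp -R ../../shared/models ./models
aws secretsmanager get-secret-value \
    --secret-id my-function-signing-key \
    --query SecretString --output text > signing-key.pem
```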