Aws-cli: Allow `aws cloudformation package` to run a build script before zip'ing

Created on 23 Mar 2018 · 5 comments · Source: aws/aws-cli

Often when building packages, there is a need for a build step before code can be zipped, e.g. to gather dependencies that need to be packaged. Currently that means the build has to happen before `aws cloudformation package` is run.

It would be nice if the build could be done by the package command, e.g. via some hook.

My initial idea was:

  • run `aws cloudformation package --build-with ${command}`
  • for every path in the template:

    • the full directory gets copied to a temporary directory

    • ${command} is run in that directory

    • the contents of the directory get zipped

    • the zip is uploaded to S3

    • the temporary directory is deleted
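
To make the intent concrete, a rough manual equivalent of that flow today might look like the sketch below; the directory, bucket name, zip path, and build command are placeholders, not part of any proposal.

```bash
#!/usr/bin/env bash
# Manual equivalent of the proposed --build-with flow for a single CodeUri
# directory. All names here (SRC_DIR, BUCKET, the build command) are
# illustrative placeholders.
set -euo pipefail

SRC_DIR="./my-function"                           # path referenced in the template
BUILD_CMD="pip install -r requirements.txt -t ."  # or make, ./build.sh, ...
BUCKET="my-artifact-bucket"

# 1. Copy the source to a temporary directory so build output never lands
#    in the working tree.
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "$TMP_DIR"' EXIT
cp -R "$SRC_DIR/." "$TMP_DIR/"

# 2. Run the build command inside the copy.
(cd "$TMP_DIR" && eval "$BUILD_CMD")

# 3. Zip the contents and upload the archive to S3.
rm -f /tmp/my-function.zip
(cd "$TMP_DIR" && zip -qr /tmp/my-function.zip .)
aws s3 cp /tmp/my-function.zip "s3://$BUCKET/my-function.zip"

# 4. The temporary directory is removed by the EXIT trap.
```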

This isn't very flexible, but it allows you to run `make`, `pip install -r requirements.txt -t .`, or `./build.sh` without leaving the files they create on disk. A big downside is that builds can't reuse the output of a previous build. An alternative could be:

  • run `aws cloudformation package --build-hooks pre=${command01}, post=${command02}`
  • for every path in the template:

    • `${command01} /path/from/template` is run. This must output a path to a directory (if nothing is output, the original path will be used)

    • the contents of the directory that the command gave as output get zipped

    • the zip is uploaded to S3

    • `${command02} /output/from/command01` is run

This is more flexible and lets the user decide whether to reuse build artifacts, but it is harder to use.
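
As an illustration of this second variant, a hypothetical pre hook that builds into a reusable directory and prints it for the CLI to zip could look roughly like this; the output location and the pip-based build step are assumptions, not a defined interface.

```bash
#!/usr/bin/env bash
# Hypothetical pre hook for the --build-hooks variant. It receives the
# CodeUri path as $1, builds into a persistent directory next to the source,
# and prints that directory on stdout so the CLI knows what to zip.
set -euo pipefail

SRC_DIR="$1"
OUT_DIR="${SRC_DIR%/}.build"   # e.g. ./my-function -> ./my-function.build

# Copy the source into the output directory and install dependencies there.
# Because OUT_DIR persists between runs, a smarter build step could skip
# work when nothing changed; here it simply reruns the install. pip's own
# output is sent to stderr so stdout stays clean for the path contract.
mkdir -p "$OUT_DIR"
cp -R "$SRC_DIR/." "$OUT_DIR/"
pip install -r "$OUT_DIR/requirements.txt" -t "$OUT_DIR" --quiet 1>&2

# Contract from the proposal: print the directory whose contents get zipped.
echo "$OUT_DIR"
```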

I'd be happy to work on implementing this (time permitting), once there is a decision on the best approach.

feature-request needs-discussion

All 5 comments

cc @sanathkr

My thoughts:

  • The build script should receive the actual source path, not a temporary copy, and should determine the paths for its output (as opposed to, say, taking the zip path as an input). The build script may want to leave its artifacts in/near the source so that other tools can also use the artifacts.
  • The build script should, or should be able to, perform the zipping itself. We've found that, in order to tell more precisely when code has changed, and therefore needs to be rebuilt, we actually need to farm out the build results to several directories, and then merge them into the zip file.
  • It's hard to make subprocess-based communication extensible; maybe the build script takes as input a path to write a JSON file with its output? (A rough sketch of such a protocol follows this list.)
  • The build script should probably be called once per CodeUri, rather than once for the template with all the CodeUris, but could also maybe receive a correlation id that's the same for all invocations from a single aws cloudformation package call?
  • The "hooks" part seems like it could be orthogonal to the "build" feature. The post hook seems most useful, where it could, for example, index the uploaded package(s).

The package command doesn't fundamentally change anything inside the directories it zips up, so there's not really any distinction between running a build script before running package vs running it during. I'm not sure we can provide any added value by bundling in hooks, especially given templates which might contain multiple functions that need their own distinct build steps. Overall I'm not inclined to include this feature, though if there's some big gain that I'm missing I can reconsider.

To take a step back, this is one potential solution to an existing, fundamental problem with `aws cloudformation package`. Currently, `aws cloudformation package` does three things:

  1. Zips up directories
  2. Uploads those zips to S3
  3. Inserts the S3 locations into a template

It is a pain point that these steps cannot be split apart or modified. The zip step only takes paths relative to the template directory, disallowing out-of-source builds. There is no way to hook into the upload step, e.g. to index uploads.

A fundamental question is: does the AWS CLI, along with SAM, intend to provide a fully capable tool for developing Lambda functions? Without some kind of capability that would allow me to, for example, run `pipenv install` during the call to `aws cloudformation package`, I am always going to need some separate, third-party tooling to wrap the calls to the AWS CLI, just in case I need to do a few extra things.
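
For reference, the kind of wrapper being described often ends up looking something like the following; the directory layout, bucket, and file names are placeholders, and plain pip stands in for pipenv.

```bash
#!/usr/bin/env bash
# A thin wrapper of the sort described above: do the extra build work first,
# then hand off to aws cloudformation package for the zip/upload/rewrite
# steps. Directory layout, bucket, and file names are placeholders.
set -euo pipefail

# The part package cannot do today: install dependencies into each function
# directory before anything is zipped.
for fn_dir in functions/*/; do
  if [ -f "${fn_dir}requirements.txt" ]; then
    pip install -r "${fn_dir}requirements.txt" -t "$fn_dir" --quiet
  fi
done

# The three steps package does do: zip, upload, and rewrite the template.
aws cloudformation package \
  --template-file template.yaml \
  --s3-bucket my-artifact-bucket \
  --output-template-file packaged.yaml
```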

Rather than having to write a script to do a generic thing like installing requirements, I would love to see this become part of the package command itself.

You can read the `Runtime` property and, if it's Python and a `requirements.txt` file exists, run `pip install -r requirements.txt -t .` before zipping.

The same goes for Node.js with a `package.json` and an `npm install`; I'm not sure how other runtimes work.

In addition, you could still allow pre and post scripts, for example to inject generic models that are in the local repo, or to inject a secret hash or certificate/key that cannot be committed but needs to be in the Lambda function, etc.
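
A rough sketch of that idea, guessing from the files present rather than reading the `Runtime` property, and with all names purely illustrative:

```bash
#!/usr/bin/env bash
# Illustrative sketch of runtime detection for a single CodeUri directory.
# A real implementation inside the package command would read the Runtime
# property from the template rather than guessing from files.
set -euo pipefail

DIR="${1:?usage: $0 <codeuri-directory>}"

if [ -f "$DIR/requirements.txt" ]; then
  # Python runtimes: vendor the dependencies into the function directory.
  pip install -r "$DIR/requirements.txt" -t "$DIR" --quiet
elif [ -f "$DIR/package.json" ]; then
  # Node.js runtimes: npm install puts node_modules inside the directory.
  (cd "$DIR" && npm install --silent)
fi
# Other runtimes (java, go, dotnet, ...) would need their own branches here.
```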
