Aws-cdk: Reduce number of parameters used by assets

Created on 29 Jul 2019 · 71 comments · Source: aws/aws-cdk

  • I'm submitting a ...

    • [ ] :beetle: bug report
    • [x] :rocket: feature request
    • [ ] :books: construct library gap
    • [ ] :phone: security issue or vulnerability => Please see policy
    • [ ] :question: support request => Please see note at the top of this template.
  • What is the current behavior?

CloudFormation stacks are limited to 60 parameters; the CDK produces a seemingly excessive number of parameters, making it easy to breach that limit.

  • What is the expected behavior (or behavior of feature suggested)?

To perhaps use mappings instead, as suggested in the CloudFormation docs (see the sketch after this list).

  • What is the motivation / use case for changing the behavior or adding this feature?

To be able to define, for example, more than 20 Lambda functions in a stack: currently, for each function, the CDK generates one parameter for its artifact hash, one for its S3 location, and one for its version.
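For illustration, here is a minimal sketch of the mappings idea in CDK terms, assuming asset locations that are fully known at synth time (all names and values below are hypothetical):

import { App, CfnMapping, Stack } from "@aws-cdk/core";

const app = new App();
const stack = new Stack(app, "MappingDemo");

// One Mapping holding every asset location, instead of 3 parameters per asset.
const assetLocations = new CfnMapping(stack, "AssetLocations", {
  mapping: {
    UploaderCode: {
      bucket: "my-staging-bucket",          // hypothetical bucket name
      key: "assets/0123456789abcdef.zip",   // hypothetical source-hash key
    },
  },
});

// Renders as Fn::FindInMap instead of a Ref to a parameter.
const bucketName = assetLocations.findInMap("UploaderCode", "bucket");
const objectKey = assetLocations.findInMap("UploaderCode", "key");

The trade-off: unlike parameters, mapping values are baked into the template at synth time, so this only helps if asset locations need no per-deployment degrees of freedom.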

@aws-cdk/assets @aws-cdk/aws-cloudformation @aws-cdk/core effort/medium feature-request in-progress p1

Most helpful comment

We are looking into improving this as part of our work on CI/CD. The current thinking is to actually reduce the number of asset parameters to zero by using well-known, convention-based physical names for the bootstrapping resources (bucket/ECR repository) and the source hash as the key (S3 object key/Docker image tag). This will basically mean that we don't need any degrees of freedom during deployment. I am curious about people's thoughts on this...

All 71 comments

Hi @alexdilley,

Thank you for reaching out!
We are aware of this gap, and will address it when able. Someone will update this issue when that happens.

Hi @NGL321,

Thanks for your response.

We just started using the CDK and ran into this issue. We have 21 lambdas, and the parameter count comes to more than 63. This makes it impossible for us to deploy with the CDK right now, as CloudFormation doesn't allow it. Is there any way to make the CDK inline the values (bucket/key/version hash) directly in the template instead of passing them as parameters?

The current workaround that I have been using is:

  1. synthesizing the template with cdk
  2. then modifying the template to inline the S3 bucket and URI instead of passing them as parameters
  3. then using aws cloudformation deploy to deploy instead of cdk deploy

Code snippet in case anyone else needs to use it:
https://gist.github.com/niranjan94/92f2636a29f09bd6cc53085951e78046
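For reference, a rough sketch of the idea behind such a script (this is not the gist itself; the template path and parameter names below are assumptions):

import * as fs from "fs";

// Literal values that would otherwise be passed as CloudFormation parameters.
const values: Record<string, string> = {
  // e.g. "AssetParameters<hash>S3Bucket<suffix>": "my-toolkit-bucket",
  // e.g. "AssetParameters<hash>S3VersionKey<suffix>": "assets/<hash>.zip",
};

const template = JSON.parse(fs.readFileSync("cdk.out/MyStack.template.json", "utf8"));

// Walk the template, replacing { "Ref": "<asset parameter>" } with the literal value.
function inline(node: any): any {
  if (Array.isArray(node)) return node.map(inline);
  if (node !== null && typeof node === "object") {
    if (typeof node.Ref === "string" && node.Ref in values) return values[node.Ref];
    for (const key of Object.keys(node)) node[key] = inline(node[key]);
  }
  return node;
}

template.Resources = inline(template.Resources);
for (const name of Object.keys(values)) {
  if (template.Parameters) delete template.Parameters[name];
}
fs.writeFileSync("inlined.template.json", JSON.stringify(template, null, 2));

The resulting template can then be deployed with aws cloudformation deploy, as in step 3 above.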

The long-term solution should be to have only a single parameter per CDK asset. It will still require a parameter per asset, since asset locations change based on their source hash, but it will reduce the number of parameters by a factor of 3.

I do not see why all parameters cannot be placed into a map parameter. They must have unique keys/ids already since parameters cannot have duplicated names, correct?

I do not think the community would consider a stack limited to 30 assets to be the long-term solution; and that's 30 assets assuming the users need no parameters of their own (a pretty faulty assumption, imho).

@sublimemm I think that would also be the best way; however I'm not sure how it could be a map. CloudFormation doesn't seem to support pulling out values from a map unless it's specifically a Mapping. Is there some function I'm missing?

What I've seen is having a long string as the parameter (joining all the parameters with a special character such as |) and then using a combination of !Split and !Select. CDK would have to maintain the appropriate indices for each asset with this solution.
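For what it's worth, a sketch of that packed-parameter idea in CDK terms (purely hypothetical; this is not something the CDK does today):

import { App, CfnParameter, Fn, Stack } from "@aws-cdk/core";

const app = new App();
const stack = new Stack(app, "PackedDemo");

// One parameter carrying all asset locations, e.g. "bucket|key|bucket|key|...".
const packed = new CfnParameter(stack, "AssetLocations", { type: "String" });

const parts = Fn.split("|", packed.valueAsString);
const firstBucket = Fn.select(0, parts);  // the synthesizer would have to track indices
const firstKey = Fn.select(1, parts);

Since Fn.select takes a literal index, the synthesizer would have to assign every asset a stable position in the packed string.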

@eladb You said the one parameter per asset would reduce the number of parameters by a factor of 3, but I think it will actually do much more. Our stack has 5 assets and between them 63 parameters. So reducing it to 5 would be a much more palatable solution.

It's unclear to me when the CDK decides something is a new asset or not... but I thought your original suggestion was one parameter per construct/lambda/etc. One per asset seems tenable, assuming the cdk is diligent with splitting the stack into assets.

@sublimemm I just looked at my template and it looks like it follows the 3 parameters per asset. Maybe you have something else adding parameters?

After some digging, it's all about the lambdas. It's adding 3 per lambda; we have tons of lambdas, and many lambdas share an asset.

[image]

Here are some examples; you can see they're duplicating the bucket (obviously not needed, since all CDK stacks are deployed to the same bucket, the toolkit bucket).

They're also duplicating parameters if multiple lambdas share an asset bundle (we have tons of them that share the same asset path).

We are looking into improving this as part of our work on CI/CD. The current thinking is to actually reduce the number of asset parameters to zero by using well-known, convention-based physical names for the bootstrapping resources (bucket/ECR repository) and the source hash as the key (S3 object key/Docker image tag). This will basically mean that we don't need any degrees of freedom during deployment. I am curious about people's thoughts on this...

I think that is a great solution.

To add to this...
I'm hitting the max parameters (60) while using CDK assets for AppSync FunctionConfigs and Resolvers. I have about 33 resolvers, and it looks like the CDK creates 3 parameters per asset (as mentioned above). Reducing asset parameters to zero would help me a ton!

I'm attempting to migrate from raw CloudFormation YAML templates to the CDK and immediately hit this issue. It generated 120 parameters on the first attempt, so I'm immediately blocked. I just spent 3 days converting templates thinking "surely the CDK handles maximum limits". Very, very discouraging to see this happen before ever being able to deploy our structure. We have tens of thousands of lines of config and I want to move the team away from that.

Would love to see some progress made toward this fix.

In the meantime, anyone have a workaround aside from the one above? Would re-structuring the hierarchy in some way help? I've tried to create extra layers but they then just pass the parameters through the layer and the problem remains.

@tekdroid yep. I tried restructuring too, but it didn't work out. So I'm still using the workaround I mentioned above (inlining all the asset paths into the template) and using aws cloudformation deploy instead of cdk deploy.

Updated script to do the inlining in case you are interested:

https://gist.github.com/niranjan94/92f2636a29f09bd6cc53085951e78046

@niranjan94 thanks for the updated script, I appreciate that! I'll check it out and see if I can use this method for now. Cheers!

@niranjan94 Your script is not working as expected for me. Would you mind helping me out? It removed all of the parameters, but didn't replace their usages.

Is there another medium by which we could chat for a moment? If you can't that's fine, I can reverse engineer this script.

Mainly I just wanted to see if I'm running the synthesis of the cdk output the same way you are. Looks like you're looking for Resource -> Properties -> Content, but I don't have a node at that path. I have Resource -> Properties -> TemplateURL, which is a join function for the s3 bucket/key parameters that were removed.

EDIT: No worries, after second thought I really don't want to go down this road. I'll continue our teams conversion when this is officially fixed inside the CDK.

@eladb I really like reducing the number to zero and using asset hashes. For the toolkit bucket, I would suggest an Fn::ImportValue instead of convention-based names to prevent name squatting attacks.

We can't use Fn::ImportValue because we need to know before deployment where to upload assets (see the cdk-assets RFC).

Copy @rix0rrr

Are there any estimates on when a fix for this could be out?
I'm trying to make a decision on whether to go with the CDK in my team this year or maybe wait it out a bit more xD

Is this now possible with 1.22.0?

I have 8 lambdas and another 8 or so custom resource lambdas that push me over the 60 parameter limit.

I had to use the same asset for the code for all of my lambdas.

How are people getting around this until it is fixed?
I tried splitting stuff into multiple stacks and pointing to interstack dependencies through the fromArn methods, but this recreates each resource in the current stack and ends up clogging up the parameters anyway.

@eladb We are currently facing the same issue related to nested stacks.

For each nested stack we create, the CDK creates 3 parameters in the parent stack, thus limiting the total number of nested stacks you can use in a project.

I understand that the CDK team decided on a particular architecture around this part. I do believe this constitutes, in some sense, an abuse of the concept of parameters, as the excessive usage by the CDK itself results in users of the CDK not being able to use parameters themselves. Moreover, it creates unnecessary extra limits on top of the limits posed by CloudFormation.

To bypass this limitation, we currently combine resources in stacks more aggressively than we would typically do, which results in a less straightforward architecture of the project on our side.

Hoping to see some changes around this soon, as I would consider this high-priority CDK functionality that needs to be refactored.

I'm facing the same issue, and I'm unable to push my stack now that I've reached the 200-resource limit (so it is impossible to use a single stack) and 78 parameters (so it is impossible to use the nested-stacks architecture, at least using the CDK). Please raise the priority of this blocking issue, or give us a temporary workaround.

@cbertozzi A workaround could be creating a separate stack and deploying both. Note this does have the disadvantage that potentially one stack succeeds and the other rolls back. This might or might not be an issue depending on how they're split up.

Another option could be to have your code assets bundled together so that you don't have so many parameters generated. I don't think this will address the 200 resources issue, but it would address the parameter one.

Another option is: use SAM. /snark

thanks @dehli for the quick reply. Your second option was our first solution, but we reached the 200-resource limit.
About the first one: I don't think we have many more solutions to explore. I don't like having so many separate stacks, partly because we have some refs to share between them and I don't know exactly how to use the CDK to export refs from one stack and import them in another.

Sharing is pretty easy with CDK! You can use Exports and Fn::ImportValue to share across stacks (or you can always use regular parameters which probably won't work since you're running low on them).

The CDK code would look something like this:

import { CfnOutput, Fn } from "@aws-cdk/core";

// In your stack that is exporting a variable
new CfnOutput(this, "TableName", {
  exportName: "ProductionTableName",
  value: "production-table" 
});

// In your stack that is importing a variable
const tableName = Fn.importValue("ProductionTableName");

Also running into this issue after trying to migrate a stack with 19 lambdas to the CDK. We added two more resources and are now at 63 parameters, which is blocking deployment. We will be looking into the workarounds, but it's pretty annoying that our cloud development kit creates invalid cloud infrastructure files.

@thomascclay sorry to hear about your experience. We are absolutely aware of this issue and we are working to refactor how assets are addressed so assets will not need any parameters. We expect this to be rolled out within the next couple of months.

In the meantime, have you tried splitting your stack into multiple stacks? The CDK will automatically wire any references across these stacks through imports/exports.
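For anyone trying this, a minimal sketch of that automatic wiring (all names here are hypothetical):

import { App, CfnOutput, Construct, Stack, StackProps } from "@aws-cdk/core";
import { AttributeType, Table } from "@aws-cdk/aws-dynamodb";

class DataStack extends Stack {
  public readonly table: Table;
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    this.table = new Table(this, "Table", {
      partitionKey: { name: "pk", type: AttributeType.STRING },
    });
  }
}

class ApiStack extends Stack {
  constructor(scope: Construct, id: string, table: Table, props?: StackProps) {
    super(scope, id, props);
    // Referencing table.tableName across stacks makes the CDK synthesize an
    // Export in DataStack and a matching Fn::ImportValue here, with no manual
    // CfnOutput/Fn.importValue pair required.
    new CfnOutput(this, "TableName", { value: table.tableName });
  }
}

const app = new App();
const data = new DataStack(app, "DataStack");
new ApiStack(app, "ApiStack", data.table);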

Any estimates for this? This is kind of a showstopper for us.

We tried splitting the stack into multiple stacks (instead of NestedStacks), and this causes issues when deploying any change to a resource that another stack depends on.

E.g. (replace the S3 Bucket with any resource):

  • ParentStack has S3 Bucket with name "some-bucket"
  • ChildStack contains a lambda that needs to be able to write to that S3 Bucket.

What happens when we try to change the name of the S3 Bucket?

  1. CloudFormation creates the new S3 Bucket
  2. It tries to remove the Output from ParentStack
  3. It fails with an error saying that it can't delete an output that ChildStack depends on

Hi,

We are also facing this same issue. I have 23 lambdas as of now, and this is just a start for us; we could potentially be writing 75+ lambdas, and generating 3 parameters per lambda is not helping.
Can you please provide an estimate for when this issue will be fixed? As I see it, this issue was raised 10 months back.

Yes, I do want to update on the proposed workaround of splitting stacks.

Firstly, the change from serverless to cdk was supposed to be one-to-one as far as resources were concerned, so combining lambdas (and the redesign that would require) was out of scope.

Splitting the stack up into parts ran us into more headaches when it came time to deploy and update the stack, due to all the dependencies (similar to jansav's issues). It created more of a headache than it solved, so I don't think it's a good workaround to recommend.

We expect this to be rolled out within the next couple of months.

Here we are two months later. My team will be doing another push to use CDK, but affording some time for a redesign and elimination of lambdas. It would be very good news to us if you said this issue was being fixed on the next release of CDK, as it would save us time and concerns.

Ran into this issue as well while trying to migrate an API Gateway with 20 lambda-backed endpoints. We are unable to deploy because we are getting the following error:

Error [ValidationError]: Template format error: Parameter count 64 is greater than max allowed 60

This limit needs to be well documented, as it is easy to run into, and plenty of development time has been spent converting to the CDK only to hit this issue when doing a final deployment.

Is there an update on when parameters will be going away for assets?

So uh... The story so far:

At first, we were quite ecstatic doing our modest deployments using a single CloudFormation stack.

And all was good for a while.

After said while had passed, however, we hit the AWS CloudFormation limit of 200 resources, and were forced to start splitting stuff more or less arbitrarily into nested stacks. This we did, and all was ok for a while, despite:

  1. Longer deployment times.
  2. Higher complexity.
  3. Some logging for deployment being obscured in nested stacks.
  4. Requirement to migrate e.g. DynamoDb data around manually, as relocating existing tables to new stacks cannot be done using the CDK, AFAIK.

But then we hit the AWS CloudFormation limit of 60 parameters, which the CDK uses internally for establishing dependencies between nested stacks. We then migrated to ordinary stacks instead (as suggested e.g. in this thread by @eladb), as ordinary stacks use CloudFormation import and export for dependencies, and thus do not suffer from said limitation. Having done this, all seemed bearable despite:

  1. Longer deployment times.
  2. Higher complexity.
  3. Another manual requirement for DynamoDb migration.

...but most regrettably, the arrangement now appears to prevent us from changing mostly anything (lambdas, buckets, etc.), as deployments fail pretty much as @jansav expertly describes. The impact of this is that our production deployment cycle, which we originally were happy to describe as continuous, is now anything but. Our development is also taking a hit, as the easiest way to deploy to a test/development environment is to laboriously recreate it instead of just updating it.

All this being said, we do think the AWS CDK has the right idea. If only this one quirk gets fixed, things could go back to that awesomeness we started with :)

So uh... Keep it up? Thanks.

@Iku-turso That's exactly been the case with me as well. I have now split those nested stacks into individual stacks and use some public variables in one stack to pass values through to another stack via props.

@eladb just looked over the ecr-assets merge. Looks great: https://github.com/aws/aws-cdk/pull/5733. Is this the basic idea of how we want to handle S3 assets as well (nested stacks resolving into S3 assets with a deterministic name (hash))?

@eladb just looked over the ecr-assets merge. Looks great: #5733. Is this the basic idea of how we want to handle S3 assets as well (nested stacks resolving into S3 assets with a deterministic name (hash))?

@Nestor10 Please elaborate? :)

@Iku-turso That's exactly been the case with me as well. I have now split those nested stacks into individual stacks and use some public variables in one stack to pass values through to another stack via props.

@anandakrishnan-a I'm having severe second thoughts on using ordinary stacks in place of nested stacks, as the overhead in development and deployment is too much. I feel it's better to take a design hit with something like what @dehli described, just to play for time until the root cause gets fixed.

Or maybe we'll limit the number of dependencies artificially to satisfy the arbitrary limitations, for a while at least. Deployment reliability is gold?

@eladb just looked over the ecr-assets merge. Looks great: #5733. Is this the basic idea of how we want to handle S3 assets as well (nested stacks resolving into S3 assets with a deterministic name (hash))?

Yap

Sorry, closed in error.

For anyone who is interested, here's my solution, in Python: I create a number of "stub" stacks. Each stub is passed a list of parameters that defines the nested stacks to be created in it. I limit each stub to 15 nested stacks.

substacks_per_stub = 15
for i in range(0, len(conf_list), substacks_per_stub):
    nested_stack_group = conf_list[i:i + substacks_per_stub]
    # scope and construct id were omitted in the original; shown here for completeness
    StubStack(app, f"StubStack{i // substacks_per_stub}", groups_to_deploy=nested_stack_group)

The stub just iterates over the passed list and generates a nested stack per item.

for item in list_of_items:
    # one nested stack per item (scope/construct id again omitted in the original)
    Instantiate_nested_stack(needed_param=item)

There are a couple of benefits: the first is deployment speed, because we have passed concurrent execution off to AWS. The other is that very little of my code had to change in order to implement this.

We've just hit this during our dev effort. We're shipping to a client at the end of July but are currently stuck despite splitting things up into multiple nested stacks. Not sure how to proceed from here. Splitting into separate stacks is not an option (too much overhead). We're deploying to AWS as well as GovCloud, and hitting this limit breaks everything :/

For now, the workaround that I've implemented (which is extremely hacky) is to use the same code asset for all Lambda functions:

this.codeAsset = Code.asset("./dist");
const boxUploadFunction = new Function(this, "...", {
  description: "...",
  runtime: Runtime.NODEJS_12_X,
  handler: "folder/src.handler",
  code: this.codeAsset,
  timeout: Duration.seconds(15),
  memorySize: 128
});

We use Webpack on our Lambdas to package all dependencies into a single source file. Then each Lambda source file containing a handler is placed into its own directory. From there, we create a single code asset, and use that asset with all our Lambdas, specifying a specific folder / handler combination.

This is a terrible workaround, but it does reduce the number of parameters used, since each Lambda asset appears to use 3 parameters and we have over 20 Lambdas.

Hope this helps someone...
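For illustration, the sharing pattern looks roughly like this (handler paths hypothetical); the asset, and therefore its 3 parameters, is paid for once rather than once per Lambda:

import { Code, Function, Runtime } from "@aws-cdk/aws-lambda";

// One asset for the entire webpack output directory...
const sharedCode = Code.asset("./dist");

// ...reused by every Function, each pointing at its own folder/handler.
const uploadFn = new Function(this, "UploadFn", {
  runtime: Runtime.NODEJS_12_X,
  handler: "upload/src.handler",
  code: sharedCode,
});

const notifyFn = new Function(this, "NotifyFn", {
  runtime: Runtime.NODEJS_12_X,
  handler: "notify/src.handler",
  code: sharedCode,
});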

@AtomicCactus
We are running into the same issue: 63 params, too many Lambdas in our API Gateway.
Any chance you can share your webpack code?
We either go hackyhack or split our gateways for now.

@mattiLeBlanc sure thing! Though this is with a caveat that we transpile our ES6 code to CommonJS using Babel, prior to invoking Webpack:

const path = require("path");
const CopyPlugin = require("copy-webpack-plugin");

// Lambda source directory.
const SRC_DIR = path.join(__dirname, "../../.compiled/aws/lambda");
const ASSETS_DIR = path.join(__dirname, "../../src/aws/lambda");

// Distribution / Build directory containing Lambda bundles.
const BUILD_DIR_PREFIX = "../../dist/aws";
const BUILD_DIR_SUFFIX = "aws/lambda";
const BUILD_DIR = path.join(__dirname, `${BUILD_DIR_PREFIX}/${BUILD_DIR_SUFFIX}`);

// This is effectively the file structure within the BUILD_DIR directory.
// It helps to keep it the same as source file structure.
// Each entry is the path to a Lambda function file, without the .js extension.
const ENTRIES = [
  "external/validate/input",
  ...
];

// Assets to copy. These can be binary files, lookup tables, etc.
// For Lambda functions they will be added to the appropriate build directory.
const ASSETS = [
  // No assets yet.
];

// These packages are excluded from being bundled.
// Node dependencies are automatically excluded, but AWS libs are not.
// We don't want to bundle AWS libraries with our code, since they're present in the Lambda runtime env.
const EXTERNALS = {
  "aws-lambda": "aws-lambda",
  "aws-sdk": "aws-sdk",
};

// Generate the entry dictionary.
const entry = ENTRIES.reduce((entryMap, lambdaName) => {
  entryMap[lambdaName.split("/").pop()] = `${SRC_DIR}/${lambdaName}`;
  return entryMap;
}, {});

// Generate the asset dictionary.
function getAssets() {
  const assets = ASSETS.map((asset) => {
    return { from: asset.from, to: `${BUILD_DIR}/${asset.to}`, transform: asset.transform };
  });
  return assets;
}

module.exports = {
  // Specify the entry point for our app.
  entry,
  resolve: {
    extensions: [".js", ".mjs"],
  },
  output: {
    filename: (chunkData) => `${BUILD_DIR_SUFFIX}/${chunkData.chunk.name.split("/").pop()}/[name].js`,
    library: "lambda",
    libraryTarget: "commonjs2",
  },
  target: "node",
  mode: "production",
  optimization: {
    // Minimization isn't a good idea if we want to read stack traces from Lambda.
    minimize: false,
  },
  plugins: [
    // Copies various data files to the appropriate location in the /dist folder.
    new CopyPlugin(getAssets()),
  ],
  externals: EXTERNALS,
  stats: {
    // Disable large file size warnings. We already know we have huge files.
    colors: true,
    hash: true,
    version: true,
    timings: true,
    assets: false,
    chunks: false,
    modules: false,
    reasons: false,
    children: true,
    source: false,
    errors: true,
    errorDetails: true,
    warnings: false,
    publicPath: false,
  },
  // devtool: 'source-map'
};

In our codebase, we first run this Babel command:

babel src --ignore src/web/ --out-dir .compiled/ --source-maps

This outputs files to the .compiled/ directory, from where Webpack picks them up and moves the final outputs to the dist/ dir.

Hi @NGL321 , when can we expect a fix in the SDK for this?
We have multiple stacks (AppSync API, API gateways) where we are reaching the limits, with our lambdas producing 3 params per lambda. That caps us at 20 lambdas per stack.

I am going to try to combine our lambdas into groups of substacks, which hopefully creates subtemplates, to work around the limit of 60 params.

Update: having our API Gateway stack plus 3 stacks with 3 sets of lambdas actually worked to mitigate the issue for now.
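A minimal sketch of that grouping, with hypothetical names (in older 1.x releases NestedStack lives in @aws-cdk/aws-cloudformation rather than @aws-cdk/core):

import { Construct, NestedStack } from "@aws-cdk/core";
import { Code, Function, Runtime } from "@aws-cdk/aws-lambda";

// The 3 asset parameters per Lambda count against the nested template's own
// 60-parameter budget; the parent pays only the few parameters needed for
// each nested template asset.
class LambdaGroup extends NestedStack {
  constructor(scope: Construct, id: string, handlers: string[]) {
    super(scope, id);
    for (const handler of handlers) {
      new Function(this, `Fn${handler.replace(/\W/g, "")}`, {
        runtime: Runtime.NODEJS_12_X,
        handler: `${handler}.handler`,   // assumed layout: one file per handler
        code: Code.asset("./dist"),
      });
    }
  }
}

// In the API stack, e.g.: new LambdaGroup(this, "GroupA", handlers.slice(0, 8));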

@eladb Any update on this, and is it going to be implemented shortly? It looks like quite a few people are stuck on it right now.

This should be marked as a bug, not a feature request. I'd have to say that the CDK can't be used in production until this is fixed.

@jansav depending on what your issue is, you can spread your resources out over multiple stacks.
I created an API Gateway stack with 3 stacks of 8 lambdas, for example. The lambdas are linked to the root of the Gateway, so it deploys the same way as if you registered the lambdas within the API Gateway stack.
I know it is annoying, but at least it can unblock you.

@mattiLeBlanc, that makes things even worse. As explained above. (https://github.com/aws/aws-cdk/issues/3463#issuecomment-627960857, https://github.com/aws/aws-cdk/issues/3463#issuecomment-621138384)

This issue has halted my project's infrastructure work in its tracks. I agree with @jansav: this is a bug, not a feature request.

@eladb is there an update on the expected release of the change that unblocks this and, if so, when can we expect this release?

@damonmcminn I am sorry to hear this is such a pain. We already have most components of this solution in place... Just ironing out some kinks. @rix0rrr, do you think we can perhaps provide pointers on how to enable our new asset system to unblock these folks?

@eladb thanks for the immediate reply! It is a pain - but an apology is not necessary (although appreciated regardless).

Totally understand ironing out kinks. If enabling the new asset system in my project is something that's feasible, then I'm all for attempting it.

Is there a rough idea when the kinks might be ironed out and the new asset system might make its way into a release?

For the record: very happy with CDK and greatly appreciate the effort put into it.

The new asset system is available properly starting at 1.45.0.

Easiest way to enable it is to put this into your cdk.json:

{
  "context": {
    "@aws-cdk/core:newStyleStackSynthesis": true
  }
}

You also have to bootstrap any environments you want to deploy into using the new version of the bootstrap stack:

$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
  aws://123456789012/us-east-2

This will create a new S3 Bucket and ECR Repository with a predictable name, which we will reference directly from the template without the use of asset parameters.
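For the curious, the convention has roughly this shape; the default qualifier and bucket-name pattern can also be seen in the CloudFormation events quoted further down this thread (account and region here are hypothetical):

// Assets land in a bucket whose name is derivable before deployment:
const qualifier = "hnb659fds";   // default bootstrap qualifier
const account = "123456789012";  // hypothetical
const region = "us-east-2";
const stagingBucket = `cdk-${qualifier}-assets-${account}-${region}`;
// Objects are keyed by source hash, so the template can reference them
// directly, with no parameters.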

THIS IS GLORIOUS - thanks heaps! Works a charm.

Here's someone popping you all a well deserved beer:

[image]

ps. When this gets documented for general use, there should be a big warning about --cloudformation-execution-policies, as I initially left it off (why?!?!) and that left me to go through a tunnel of pain to return CloudFormation to the state it was in before I attempted it. Following your instructions worked perfectly.

Thanks so much!

@rix0rrr While I really love the new feature of no params, and it really saves me a lot of pain, I am worried about the fixed name of the bucket being a target of name-squatting attacks. Somebody could just create buckets with this name for random account IDs, and the real owners could not use the CDK as a consequence.

@rix0rrr Another issue: When using ContainerImage.fromAsset the generated CFN template looks like this:

            "Image": {
              "Fn::Sub": {
                "Fn::Join": [
                  "",
                  [
                    "<id>.dkr.ecr.eu-central-1.",
                    {
                      "Ref": "AWS::URLSuffix"
                    },
                    "/cdk-<my-asset-ecr>:<hash>"
                  ]
                ]
              }
            },

This is invalid CloudFormation as Join is not valid inside Sub:

E1019 Sub should be a string or array of 2 items for Resources/TaskDef3BF4F22B/Properties/ContainerDefinitions/0/Image/Fn::Sub

Am I doing something wrong or is this a bug in the new asset system?

EDIT: One more: the S3 Bucket Deployment construct does not work anymore as the custom lambda does not have permissions to use the KMS key of the asset bucket.

This seems like a bug. Can you please raise a separate issue?

ps. When this gets documented for general use, there should be a big warning about --cloudformation-execution-policies [...]

Mind raising a separate issue for that?

@rix0rrr will the new asset system ever be the default, or will we always need to adjust cdk.json?

Mind raising a separate issue for that?

Absolutely.

After running the new bootstrap command, I'm getting the following error when doing cdk deploy

Could not assume role in target account (did you bootstrap the environment with the right '--trust's?): Roles may not be assumed by root accounts.

Any idea what I'm doing wrong?

EDIT: It was due to using root credentials in our CLI. Creating an IAM User resolved the issue.

@rix0rrr Is this documented in the CDK pipelines docs?

@rix0rrr I am having a similar issue:

Could not assume role in target account (did you bootstrap the environment with the right '--trust's?): User: arn:aws:sts::[accountID]:assumed-role/AWSReservedSSO_AdministratorAccess_ddb.../[USER]is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::[accountId]:role/cdk-hnb659fds-deploy-role-[accountId]-ap-southeast-2

So the first time I bootstrap and deploy, it works. But on any subsequent deployment I get this error.

Only when I delete my S3 bucket and bootstrap again can I deploy. So I can only deploy once locally before I have to do this again.

However, this wouldn't work for us in deployment, because it would be madness to remove the staging bucket after each deploy.
I am kind of stuck now with the option to either downgrade and create sub-stacks for my more than 20 lambdas, or create a post-deploy cleanup that removes that staging bucket so we can do more than 1 deploy :-/

My CDK version is 1.49.1
We use SSO and have programmatic access for our local deployment via the aws credential file.

@NGL321 So we still have issues deploying our stack (AppSync API with lambdas, more than 20 = 60+ params) with the new CDK synthesis. The error is as mentioned in my previous post above.

Our accounts are DEV, STAGING and PROD, and we have SSO users with read/write permissions; we use the ~/.aws/credentials file with our token to deploy a personal version of the API to the DEV account locally.
So when I was the first to use the new param system, I bootstrapped it with my account. This created/updated the CDKToolkit stack and added the S3 bucket to our DEV account.
All good...
But then when my colleague also started to use the new CDK, he couldn't deploy, so he had to remove the CDKToolkit and bootstrap again. Now he could deploy... but surprise, I couldn't again.
To make things even worse, when we pushed our changes to Bitbucket, the pipeline tried to deploy the DEV version of the API to the DEV account (in which we had already bootstrapped), so it complained that the S3 bucket already exists:

StagingBucket Requested update requires the creation of a new physical resource; hence creating one.
  7/11 | 1:28:09 AM | UPDATE_FAILED        | AWS::S3::Bucket       | StagingBucket cdk-hnb659fds-assets-[account number]-ap-southeast-2 already exists
  5/11 | 1:28:09 AM | UPDATE_ROLLBACK_IN_P | AWS::CloudFormation::Stack | CDKToolkit The following resource(s) failed to update: [StagingBucket]. 
  5/11 | 1:28:12 AM | UPDATE_COMPLETE      | AWS::S3::Bucket       | StagingBucket 

So this becomes a bit hairy.
So:
A) should we move all our personal accounts to another env called USER, so that DEV solely has one API and one bucket for deployment, and we have another bucket for our personal API deployments in the USER account?
B) should the bootstrap script be more robust and check whether a bucket already exists, so it doesn't have to recreate it?
C) why are we getting these trust issues? I am not sure if we can set a trust relationship on an SSO user. Is the issue that Bob bootstraps with the DeveloperReadWrite SSO user, and then when James deploys he has a different user signature, so the bootstrapped bucket doesn't like him?

Our permission set for the developers includes "sts:*", so I'm not sure what else is required for a user to be trusted and to assume the role.

Could not assume role in target account (did you bootstrap the environment with the right '--trust's?): Roles may not be assumed by root accounts.

EDIT: It was due to using root credentials in our CLI. Creating an IAM User resolved the issue.

I'm using an IAM user, but still getting the same error as you. Do I need to create a new IAM user for the CLI?

I updated the CDK to use the new bootstrap, and synth still shows 63 parameters. I'm uploading a bunch of files as Assets and referring to them using s3_url. Does the new bootstrap work for the S3 Asset class as well?

For anyone who is struggling, here's how we made it work in our case:
We followed @rix0rrr's suggestion and that helped mostly (thanks for your valuable suggestion). However, we faced some minor issues even after that.
After bootstrapping the environments successfully, we still hit the parameter limit. A quick update of the cdk CLI to the latest version (1.67.0 in our case) resolved that.
Moreover, we wanted to deploy using a user other than the administrator. CloudFormation throws self-explanatory errors in such cases. As a fix, we needed to add the deployment user as a trusted entity to some of the roles created by cdk after bootstrapping the environments (those roles are clear from the CloudFormation error messages).

Doing so made it work perfectly.

Nice, our team solved this the same way and ran into similar issues. It would be nice to have a list of the exact permissions a role would need to have least privilege access to make this work.

I completely agree. We usually wait for deployments to break before modifying the trust relationships; it helps keep the permission boundary as intact as possible.
Hopefully folks at AWS will document this soon.
