For example, in stack 1 we have a DynamoDB table. In stack 2 we have a bunch of Lambdas that reference that table (ARN in a policy, tableName in env params).
A good user experience would be that when I change something on the DynamoDB table (for example, remove a range key), I would be able to deploy. But what happens is that it complains the export can't be updated because stack 2 depends on it.
So what I need to do is first comment out all DynamoDB references in stack 2, deploy that with -e, then deploy stack 1, then uncomment all the DynamoDB references in stack 2 and deploy stack 2.
Can we eliminate those manual steps, let CDK handle the references in later stacks itself behind the scenes, and just let me know when it's all done?
I'm running into similar issues.
I deploy all my Lambda versions in their own stack first, and then create my Lambda functions in multiple other stacks. I am unable to update layers without first detaching them from all of the functions.
I'm getting an error saying that my layer stack cannot be updated because it is in use by other stacks. In these instances, would it be possible to keep the reference to the old version and add a new reference to the new one, and then later switch the Lambdas over when the Lambda stacks are deployed?
This is a result of how CloudFormation behaves in the face of cross-stack references. When a stack imports a value from another stack, CloudFormation blocks any updates to the export, since there is currently no mechanism to automatically cause the consuming stack to be redeployed with the updated value.
Given this is how CloudFormation behaves, and I am not aware of any plans to change this behavior, let's explore what can be done to make this less painful.
@binarythinktank I am curious, why is the range key exported? What's the export value? Can you share some code?
@Lightning303 in your case, I am wondering if the preferred behavior would be to deploy a new stack for your layers instead of updating it. For example, if you only change the stackName of the layers stack and "cdk deploy" your app. I think it would just work. The lambda versions will now point to the new layers stack and then you can safely delete the old layers. Kind of like "traffic shifting".
As a general note, one thing to consider is leveraging nested stacks instead of "sibling stacks". Nested stacks have an intrinsic dependency order (a nested stack is always deployed before its parent) and therefore CloudFormation allows references from/to nested stacks to be updated. I am not 100% sure this works for your use case, but sometimes nested stacks are easier to manage across updates.
@Lightning303 yes, this is also an issue I have been running into that seems to be similar in both cause and kludge-fix.
@eladb the range key isn't exported, but removing it, I think, means the DynamoDB table has to be recreated, and this fails because of the dependencies.
Would a possible solution be to build those manual kludge steps into the CDK itself?
These are the steps I go through to manually deal with the current situation:
Is there maybe a more clever way, for example by not depending directly on DynamoDB outputs (e.g. tableName) but instead delegating them through Parameter Store to their consumers (e.g. Lambda functions)?
Has anyone had any success with a viable solution that doesn't need the steps @binarythinktank listed?
@eladb I'm interested in how, for example, an AppSync + DynamoDB + Lambda resolvers app would look with nesting. I think it would be helpful to have some guidance/best practices on this matter. For now, even a really rough hierarchical list of stacks for the above scenario would help me a lot. Thanks :)