Note: for support questions, please first reference our documentation, then use Stack Overflow. This repository's issues are intended for feature requests and bug reports.
I'm submitting a ...
What is the current behavior?
If the current behavior is a :beetle:bug:beetle:: Please provide the steps to reproduce
```python
stack_1 = FirstStack(
    app=app,
    id='FirstStack'
)
stack_2 = SecondStack(
    app=app,
    id='SecondStack',
    construct_from_stack_1=stack_1.some_construct
)
```
This causes a dependency via a stack output (an export). When I decide not to use `construct_from_stack_1` anymore (by deleting its usage from `stack_2`), `stack_2` fails to update - for instance:
```
eks-dev
eks-dev: deploying...
eks-dev: creating CloudFormation changeset...
 0/1 | 12:13:45 | UPDATE_ROLLBACK_IN_P | AWS::CloudFormation::Stack | eks-dev Export eks-dev:ExportsOutputFnGetAttEksElasticLoadBalancer4FCBC5E7SourceSecurityGroupOwnerAlias211654CC cannot be deleted as it is in use by ports-assignment-dev
❌  eks-dev failed: Error: The stack named eks-dev is in a failed state: UPDATE_ROLLBACK_COMPLETE
The stack named eks-dev is in a failed state: UPDATE_ROLLBACK_COMPLETE
```
It looks like CDK tries to delete resources in the wrong order - starting with the output in the source stack rather than first removing its usage in the dependent stacks.
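For context, passing a construct across stacks makes CDK synthesize an auto-generated Output with an Export on the producing stack and an `Fn::ImportValue` on the consuming stack, roughly like the sketch below (the logical IDs and export names here are illustrative, not the actual generated ones):

```yaml
# FirstStack template (sketch; names are illustrative):
Outputs:
  ExportsOutputSomeConstruct:
    Value: !GetAtt SomeConstruct.SomeAttribute
    Export:
      Name: FirstStack:ExportsOutputSomeConstruct

# SecondStack template (sketch):
Resources:
  Consumer:
    Type: AWS::Some::Resource
    Properties:
      SomeProperty: !ImportValue FirstStack:ExportsOutputSomeConstruct
```

CloudFormation refuses to delete an export while any stack still imports it, which is why removing the export before removing its usage fails.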
What is the expected behavior (or behavior of feature suggested)?
The update removes resources that are no longer used.
What is the motivation / use case for changing the behavior or adding this feature?
Permanent dependencies are created, preventing dependent stacks from being updated.
Please tell us about your environment:
Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. associated pull-request, stackoverflow, gitter, etc)
Observing similar behaviour with the Python-based CDK, where I went for the `Cfn*` set of resources (for pure experimentation purposes):
The dependency has been declared - stack-lab-ecc depends on stack-lab-edu2.
When EC2 is commented out (diff):
The deploy fails trying to delete the subnet export from the first stack BEFORE deleting the EC2 instance from the second:
CDK CLI Version: 1.3.0
Python:
aws-cdk.cdk 0.36.1
aws-cdk.core 1.3.0
@rix0rrr this is the issue I meant. My current workaround for this is to create a "dummy" resource and attach the dependencies to that dummy resource. Something like this:
```typescript
import cloudformation = require("@aws-cdk/aws-cloudformation");
...
// Get all subnet ids
const subnetIds = props.vpc.isolatedSubnets
  .concat(props.vpc.privateSubnets)
  .concat(props.vpc.publicSubnets)
  .map(subnet => {
    return subnet.subnetId;
  });

// Create a dummy CloudFormation resource with all dependencies attached
const dummyWaitHandle = new cloudformation.CfnWaitConditionHandle(this, "DummyResource");
dummyWaitHandle.cfnOptions.metadata = {
  dependencies: subnetIds
};
```
I'm encountering this as well, but with trying to update dependent stacks. In one example of my use case, I'm trying to separate the creation of ECS tasks from services. Ideally, I'd like to be able to destroy a service without destroying the corresponding task (and its history).
By placing tasks and services in separate stacks, and just passing the relevant ref/arn information between stacks, I can accomplish destroying a service without destroying the task, but I can't update the task stack, since I'm blocked by the "in use by services" error.
That's just one example. Overall, for complex builds it helps from a code-organization and reusability standpoint to separate and consolidate the creation of stacks according to the resources built. But the "in-use" dependency forces complex builds to be consolidated into a single long, complex stack, with components that can't be reused, to ensure each component can be updated.
A little more information on the above...
I'm using the `Cfn*` functions almost entirely, as the default VPC creates a decidedly expensive (for my purposes, anyway) arrangement, and follow-up components expect that default VPC.
I connect stacks, notably almost everything shares the CfnVPC I created, using the "Accessing Resources in a Different Stack" method outlined here: https://docs.aws.amazon.com/cdk/latest/guide/resources.html
In that section, it details that the method uses "ImportValue" to transfer information across stacks.
However, when making changes to a "child" stack which is exported, I run into the issue outlined here: https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-stack-export-name-error/
In that article, it essentially says you should replace the ImportValue function with direct resource references.
I may be missing something, but it doesn't seem possible in CDK to have a cross-stack output emit the imported value directly instead of the `Fn::ImportValue` function.
See #4014 for a feature request that would solve this issue.
This bug is caused by the automatic dependency resolution mechanism in the CDK CLI, which means that when you update `stack_2`, it will automatically update `stack_1` first (which obviously fails as `stack_2` is still using the exported resource). The solution is really simple - just say `cdk deploy -e stack_2`, which will update only `stack_2`, and afterwards you can say `cdk deploy stack_1` to clean up the unused export.
This will fail if you at the same time add something to `stack_1` that is needed by `stack_2` - in this case, `stack_2` cannot be updated first, but neither can `stack_1` because of the export. This is an obvious limitation of CloudFormation that has nothing to do with CDK, and the simplest way to avoid it is just to make smaller changes.
The proper way to solve all problems like this is to use `NestedStack` instead of `Stack`. Automated support for that landed in 1.12.0, and it allows CloudFormation to handle this case correctly - first creating all new resources in all stacks in dependency order, then updating all references, and only finally doing a pass to remove all the replaced resources.
Not sure what should actually be done about this in CDK - one solution would be to just add a note when a stack update fails because an export is in use: "perhaps try updating the stack with `--exclusively`".
My two cents:
A tool to detect these conditions at build time (instead of deploy time) is possible, and would be a big help. Example workflow: (I made up some new commands):
```shell
# New command that locks down the interface at the user's request;
# this file only contains Imports and Exports:
cdk shrinkwrap --all > my_production_interface.json

# The user should check in their interface:
git add my_production_interface.json && git commit -m "Added current deployment interface"

# Now a regular build will fail if the current cdk output is not compatible with the interface:
cdk build --interface my_production_interface.json
# FAIL
```
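The `cdk shrinkwrap` and `cdk build --interface` commands above are made up, so as a sketch of the compatibility check such a tool could perform, here is a minimal hypothetical function that fails the build when an export recorded as in-use by the locked interface disappears from the current synth output (the data shapes and names are assumptions):

```python
def check_interface(locked_interface, current_exports):
    """Return violations: exports that the locked interface records as
    imported by other stacks, but that no longer exist in the current output."""
    violations = []
    for export_name, importers in locked_interface["imports"].items():
        # An export with no importers may be removed freely.
        if importers and export_name not in current_exports:
            violations.append(
                f"export {export_name!r} removed but still imported by {importers}"
            )
    return violations


# Hypothetical locked interface: one export still imported, one unused.
locked = {
    "imports": {
        "eks-dev:SubnetId": ["ports-assignment-dev"],
        "eks-dev:UnusedOutput": [],
    }
}

# Keeping the in-use export passes:
print(check_interface(locked, {"eks-dev:SubnetId", "eks-dev:UnusedOutput"}))  # []
# Dropping it is flagged at build time instead of at deploy time:
print(check_interface(locked, {"eks-dev:UnusedOutput"}))
```

This is the same check CloudFormation performs at deploy time, just moved to build time against a committed snapshot of the stack interfaces.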
@nakedible given `NestedStack` is now deprecated (as is all of the `aws-cloudformation` package)... do you know what the correct way to solve this problem is now?
This seems to be the most basic feature of dependencies. :/
As @nakedible said, one of the workarounds is splitting the deploy into two steps. The `-e` flag must be used so CDK doesn't deploy all stacks. Here is an example:
```shell
# first step will remove the usage of the export
cdk deploy --exclusively SecondStack
# second step can now remove the export
cdk deploy --all
```