When creating a new stack and just trying to create a new EFS, I receive numerous errors:
fsmt-96f68d16 already exists in stack arn:aws:cloudformation:us-east-1:311209486165:stack/CodepipelineForEcrStack/9eee4d30-ee60-11ea-ab6f-12717722e021
Once for each subnet.
I am doing:
```ts
import * as cdk from '@aws-cdk/core';
import { Vpc } from '@aws-cdk/aws-ec2';
import { FileSystem, PerformanceMode, ThroughputMode } from '@aws-cdk/aws-efs';

export class CodepipelineForEcrStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Import VPC
    const vpc = Vpc.fromLookup(this, "CdkTestVPC", {
      vpcId: "vpc-xxxxxxx"
    });

    // EFS
    const _efsName = 'cdkTest';
    const _efs = new FileSystem(this, 'CdkTestEFS', {
      vpc: vpc,
      performanceMode: PerformanceMode.GENERAL_PURPOSE,
      throughputMode: ThroughputMode.BURSTING,
      // fileSystemName: _efsName,
      removalPolicy: cdk.RemovalPolicy.DESTROY, // Destroy EFS when we delete the stack
    });
  }
}
```
Then when I run `cdk deploy` I get numerous errors saying the file system mount targets already exist.
During my numerous tests I was also able to confirm a related issue: when the stack is rolled back, the security groups and network interfaces it created persist and have to be deleted manually.
Even after manually checking for and deleting both the security groups and the network interfaces, I still get these errors.
I am new to the CDK, but I would expect this to launch the EFS into my existing VPC.
I was trying to create an EFS.
I was also hoping to control which subnets this mounts to, but it seems to create a mount target in every subnet, judging by how many fsmt's it tried to deploy.
I received numerous errors:
fsmt-96f68d16 already exists in stack arn:aws:cloudformation:us-east-1:311209486165:stack/CodepipelineForEcrStack/9eee4d30-ee60-11ea-ab6f-12717722e021
One of these errors for each fsmt, one per subnet.
Then the stack rolls back, and the rollback takes 2 hours to complete (only for the EFS rollback). I spent 14 hours today on 7 edits... literally. I found it was faster to delete the EFS parts myself to speed things up, but I didn't figure that out until just now...
I ran into this issue while working on my stack. I then decided to try creating just the EFS on its own, and that's when I realized I was not able to do it.
At first I thought it was due to me using a custom fileSystemName, so I commented that out, but it still happens.
In case this helps: I commented out the rest of my stack and used just the code I posted above. Since it was the same stack, just destroyed and then redeployed, I don't think this should matter, but I wanted to make note of it.
Also, this VPC already has 2 EFS mounts in it that are named something completely different.
Thank you.
This is a :bug: Bug Report
When using this on a new VPC, it works as expected. The issue only seems to happen when using a VPC that already exists. I think this may have to do with the VPC already having 2 EFS mounts (built using Terraform). I will need to test more to validate this assumption, but this is what I am leaning towards. My concern is that when I want to use this in PROD for already existing systems, this will actually prevent me from using it.
I even tried to create my own mount targets using `CfnMountTarget`, but the construct still creates its own mount targets internally and then creates my custom ones as well; I don't see a way to stop this from happening. I still get the fsmt-already-exists error on a new stack using a VPC that has other EFS mounts.
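For reference, this is roughly what I tried; a minimal sketch only, where the construct IDs, the security group, and the subnet ID are placeholders, and `vpc` and `_efs` refer to the code posted above:

```ts
import { SecurityGroup } from '@aws-cdk/aws-ec2';
import { CfnMountTarget } from '@aws-cdk/aws-efs';

// Inside the stack constructor, reusing `vpc` and `_efs` from the code above.
// A security group for the hand-rolled mount target (ingress rules omitted here).
const efsSg = new SecurityGroup(this, 'CdkTestEfsSg', { vpc });

// One custom mount target in a specific subnet (the subnet ID is a placeholder).
new CfnMountTarget(this, 'CdkTestEfsMountTarget', {
  fileSystemId: _efs.fileSystemId,
  securityGroups: [efsSg.securityGroupId],
  subnetId: 'subnet-xxxxxxx',
});
```

Even with this, the FileSystem construct still synthesized its own mount targets, so the duplicate-fsmt error remained.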
UPDATED: Confirmed, this issue only seems to happen on VPCs that already have multiple EFS mounts. Whether the error is about the mount target already existing in the stack or about the AZ, it is stopping me from finishing my project.
Also updated to 1.62.0 with the same issues...
CLI Version: 1.62.0 (build 8c2d7fc)
Framework Version: 1.62.0
Node.js Version: v12.18.3
OS: Linux Zeus 5.4.0-42-generic #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Language (Version): typescript (3.9.7)
Is this lacking the details needed to identify the issue? I can work on a better example of the issue if need be. I am not confident in my ability to work on the core CDK yet; I am still learning, but I can assist in identifying ways to reproduce this if what I provided is not sufficient.
At this point, I have to create EFS mount targets manually, as I cannot use them in the CDK since my VPC already has numerous EFS mounts. Basically, on any VPC that has more than 1 EFS mount, trying to create a new one with the CDK will error out.
@jpSimkins thanks for reporting the issue and all of the added detail! Apologies, I just have not had a chance to go through the repro steps to get to the root cause and come up with a fix just yet.
I believe the information you've provided is more than enough (amazing really!) to get us going.
I'm also running into this issue. The stack is creating 6 file system mount targets, but it tries to give them only 3 physical IDs, with each physical ID shared by 2 separate logical IDs.
CloudFormation in the console looks like the following:
| Logical ID | Physical ID |
| --------- | ----------- |
| EfsMountTarget123456789 | fsmt-abc123 |
| EfsMountTarget456789abc | fsmt-abc123 |
| EfsMountTarget789abcdef | fsmt-def345 |
| EfsMountTargetabcdef123 | fsmt-def345 |
| EfsMountTargetdef123456 | fsmt-789abc |
| EfsMountTarget987654321 | fsmt-789abc |
If you look at the output of `yarn cdk synth`, the stack is trying to create 6 mount targets, but it doesn't give any physical IDs, so I'm not sure where these are being assigned.
CLI Version: 1.67.0
NodeJS Version: 12.18.3
OS: MacOS 10.14.6
Typescript: 3.9.7
I found a workaround, at least for my case. Because we had both public and private subnets in the same AZs, I had to change the EFS to only create mountpoints on the private subnets. This reduced the redundancy and allowed me to create the EFS without any issue:
```ts
const fileSystem = new efs.FileSystem(this, 'some-efs-identity', {
  ...,
  vpcSubnets: {
    subnetType: ec2.SubnetType.PRIVATE,
    onePerAz: true, // This might help as well.
  },
});
```
Same problem here: the VPC was already defined, but no EFS was mounted in it. Setting the SubnetSelection's onePerAz property to true fixed the deployment.
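For anyone else importing an existing VPC, here is a minimal, self-contained sketch of that workaround; the stack and construct names and the VPC ID are placeholders, and it assumes the stack is given an explicit account/region so Vpc.fromLookup can resolve the VPC:

```ts
import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as efs from '@aws-cdk/aws-efs';

export class EfsWorkaroundStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Import the pre-existing VPC (placeholder ID).
    const vpc = ec2.Vpc.fromLookup(this, 'ExistingVpc', { vpcId: 'vpc-xxxxxxx' });

    new efs.FileSystem(this, 'WorkaroundEfs', {
      vpc,
      // Limit mount targets to one subnet per AZ, so the construct does not
      // try to create a second mount target in an AZ that already has one.
      vpcSubnets: {
        subnetType: ec2.SubnetType.PRIVATE,
        onePerAz: true,
      },
      removalPolicy: cdk.RemovalPolicy.DESTROY,
    });
  }
}
```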
bumping this up to a p1 to prioritize
Any update on this, folks? We have a use case for this.
@thesheps - picking this issue up this week
Mega - Thanks so much Shiv!!
@shivlaks
I have fixed this issue with the following PR.
https://github.com/aws/aws-cdk/pull/12097