Kops: Error while deleting cluster

Created on 18 Feb 2017 · 13 comments · Source: kubernetes/kops

kubectl version: v1.5.2
kops version: 1.5.1

Description: I am launching a cluster with kops, but when deleting it I am not able to remove the cluster completely.

Following errors:

kops delete cluster cluster-name --yes

"Not making progress deleting resources; giving up"

Not all resources deleted; waiting before reattempting deletion
    subnet:subnet-5e625706
    route-table:rtb-a2487ec5
    security-group:sg-8ed6b6f6
    security-group:sg-8dd6b6f5
subnet:subnet-5e625706    still has dependencies, will retry
security-group:sg-8dd6b6f5    still has dependencies, will retry
security-group:sg-8ed6b6f6    still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
    security-group:sg-8dd6b6f5
    subnet:subnet-5e625706
    security-group:sg-8ed6b6f6
    route-table:rtb-a2487ec5
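A quick sketch for triaging this output: the stuck resources each appear as `type:id`, so the ids can be pulled out of a saved log for manual inspection. The log text below is copied from the error above; the grep pattern is an illustration, not a kops feature.

```shell
# Stuck-resource log copied from the kops delete output above.
log='Not all resources deleted; waiting before reattempting deletion
    subnet:subnet-5e625706
    route-table:rtb-a2487ec5
    security-group:sg-8ed6b6f6
    security-group:sg-8dd6b6f5'

# Each stuck resource appears as "type:id"; keep the unique ids.
stuck=$(printf '%s\n' "$log" | grep -oE '(subnet|rtb|sg)-[0-9a-f]+' | sort -u)
printf '%s\n' "$stuck"
```

This prints the four unique ids (rtb-a2487ec5, sg-8dd6b6f5, sg-8ed6b6f6, subnet-5e625706), which you can then look up in the AWS console to see what is still attached to them.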

Most helpful comment

I very frequently get the dhcp-options/vpc loop during deletion using 1.5.3. I can always delete the VPC manually in the console but not the DHCP options set, so my suspicion has been that kops is trying to delete dhcp-options before the vpc, but I'm not sure.

All 13 comments

Sorry about the problem. This typically happens when there is another resource that kops doesn't delete; sometimes that's correct behavior, sometimes it isn't :-)

I'm guessing there is something in the subnet, or another instance. Did you make any changes to the cluster after installation? For example, I think it happens if you set up VPC peering.

In that case, you can just delete the k8s-related tag from the resources manually and then re-run delete cluster, so kops understands the resource is already handled. At least that's what I did when I had to delete a cluster while sharing subnets with other non-k8s instances.

I personally think it'd be better if we could set a flag in the cluster spec, something like shared: true, on each subnet, VPC, NAT gateway, etc. If the flag is set, kops would just delete its tag from the resource when deleting the cluster (and just add the tag when creating it).
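The manual workaround above can be sketched as stripping the cluster ownership tag from shared resources so kops no longer considers them its own. The tag name KubernetesCluster is an assumption (the tag kops of this era applied), and the resource ids come from the log above; the commands are printed for review rather than executed.

```shell
# Print the delete-tags commands for each shared resource so they can be
# reviewed before running them with real AWS credentials.
# KubernetesCluster is assumed to be the ownership tag kops applied.
tag_cmds=$(for id in subnet-5e625706 rtb-a2487ec5; do
  printf 'aws ec2 delete-tags --resources %s --tags Key=KubernetesCluster\n' "$id"
done)
printf '%s\n' "$tag_cmds"
```

After removing the tags, re-running `kops delete cluster cluster-name --yes` should skip those resources.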

hi @justinsb and @rdtr, thank you for the response. Yes, I realized I had added an HTTP rule on port 80 to the security group, and that was causing the problem. Now I'll delete that rule first and then run kops delete. Is there any feature for deleting such related resources with kops alone?

@justinsb same here for me, and I am not sure there was any resource we created manually.

vpc:vpc-7d1f6e9b        still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
        dhcp-options:dopt-db9bd6bc
        vpc:vpc-7d1f6e9b

@Miyurz Please check whether you added anything outside of kops. I had added an HTTP rule on port 80 and removed it before deleting the cluster. Please check whether you added any similar rule.

To add some context: if you add anything to the VPC that kops does not maintain, you will not be able to delete the VPC. For example:

Security groups
IGWs
VPNs
Routes
AWS databases
Etc.
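One way to audit a VPC for those leftovers is to enumerate each resource type with the AWS CLI. This sketch uses vpc-7d1f6e9b from the log above and prints the commands rather than executing them, so they can be reviewed and run with credentials.

```shell
# Print AWS CLI audit commands for resources kops may not manage in the VPC.
# vpc-7d1f6e9b is the VPC id from the log above.
vpc=vpc-7d1f6e9b
audit_cmds=$(cat <<EOF
aws ec2 describe-security-groups --filters Name=vpc-id,Values=$vpc
aws ec2 describe-internet-gateways --filters Name=attachment.vpc-id,Values=$vpc
aws ec2 describe-route-tables --filters Name=vpc-id,Values=$vpc
aws ec2 describe-vpn-gateways --filters Name=attachment.vpc-id,Values=$vpc
aws rds describe-db-instances
EOF
)
printf '%s\n' "$audit_cmds"
```

Anything these commands return that kops did not create has to be deleted (or untagged) by hand before `kops delete cluster` can finish.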

Closing this as the original issue was resolved. If there's something more to discuss please create a new issue, or let me know if I have closed this incorrectly.

I very frequently get the dhcp-options/vpc loop during deletion using 1.5.3. I can always delete the VPC manually in the console but not the DHCP options set, so my suspicion has been that kops is trying to delete dhcp-options before the vpc, but I'm not sure.

It seems like an AWS bug; deleting the VPC first and then the DHCP options set works.
Manually trying to delete the options set via the console gives the same failure:

The dhcpOptions 'dopt-0fc3e233a107a092f' has dependencies and cannot be deleted.
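To confirm that suspicion, you can check whether any VPC still references the stuck DHCP options set. The sketch below uses dopt-0fc3e233a107a092f from the error above and prints the command for review rather than running it (it needs AWS credentials).

```shell
# Print a command that lists VPCs still associated with the DHCP options set.
dopt=dopt-0fc3e233a107a092f
check_cmd="aws ec2 describe-vpcs --filters Name=dhcp-options-id,Values=$dopt --query Vpcs[].VpcId --output text"
printf '%s\n' "$check_cmd"
```

If this returns a VPC id, the options set cannot be deleted until that VPC is gone or re-associated with another options set.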

I'm still getting the same issue.

+1

+1

If anyone gets into this by accident: in my case the culprits were my manually added security groups.
In the VPC section, look up the VPC ID.
In EC2 > Security Groups, search by that VPC ID and clean up the custom security groups attached to the VPC.
