Terraform-provider-aws: Subnet changes in aws_eks_cluster forces creation of new resource

Created on 30 Nov 2018  ·  8 Comments  ·  Source: hashicorp/terraform-provider-aws

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version


Terraform v0.11.10
provider.aws v1.45.0
provider.template v1.0.0

Affected Resource(s)


aws_eks_cluster

Terraform Configuration Files


cluster.tf:

resource "aws_eks_cluster" "cluster1" {
  name            = "${var.cluster1_name}"
  role_arn        = "${aws_iam_role.cluster1.arn}"

  vpc_config {
    security_group_ids = ["${aws_security_group.cluster1.id}"]
    subnet_ids         = ["${var.cluster1_subnet_ids}"]
  }

  depends_on = [
    "aws_iam_role_policy_attachment.cluster1_1",
    "aws_iam_role_policy_attachment.cluster1_2"
  ]
}

variables.tf:

variable "cluster1_subnet_ids" { type = "list" }

terraform.tfvars:

cluster1_subnet_ids = [
  "subnet-1...",
  "subnet-2...",
  "subnet-3..."
]

Expected Behavior

It seems that changes to the subnet configuration shouldn't force the creation of a new cluster.
At least there is no mention of that in AWS documentation:
https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-eks-cluster-resourcesvpcconfig.html#cfn-eks-cluster-resourcesvpcconfig-subnetids
The CloudFormation docs state "Update requires: No interruption" for SubnetIds.

Actual Behavior

Terraform wants to create new cluster and destroy the old one:

-/+ aws_eks_cluster.cluster1 (new resource required)
      ...
      vpc_config.0.subnet_ids.#:                  "3" => "2" (forces new resource)
      vpc_config.0.subnet_ids.2044043540:         "subnet-2..." => "" (forces new resource)
      ...
Plan: 1 to add, 0 to change, 1 to destroy.

Steps to Reproduce

  1. Remove one of the subnets in the above configuration
  2. Run terraform plan
enhancement service/eks

Most helpful comment

There is now an API to update the VPC config, and a corresponding Go SDK function, UpdateClusterConfig.

We're only using it for the private/public endpoint settings; however, it supports updating the subnets and security groups as well:

https://github.com/terraform-providers/terraform-provider-aws/blob/b7fa69a80b66154d8ee72f101c13ebae290a7402/aws/resource_aws_eks_cluster.go#L299-L303

Can we reopen this now that there is API support?
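
In Terraform terms, a minimal sketch of what that enables, assuming a provider version that exposes the endpoint access arguments (the values below are hypothetical):

resource "aws_eks_cluster" "cluster1" {
  name     = "${var.cluster1_name}"
  role_arn = "${aws_iam_role.cluster1.arn}"

  vpc_config {
    # Changing either of these still plans a destroy/create of the cluster.
    security_group_ids = ["${aws_security_group.cluster1.id}"]
    subnet_ids         = ["${var.cluster1_subnet_ids}"]

    # These arguments map to the UpdateClusterConfig API call and are
    # updated in place (hypothetical values; the arguments ship in later
    # provider releases).
    endpoint_private_access = true
    endpoint_public_access  = false
  }
}

In other words, the endpoint access fields flow through UpdateClusterConfig and can change in place, while subnet_ids and security_group_ids remain ForceNew.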

All 8 comments

Hi @akonokhov 👋 Since EKS' launch, there has not been an update API call available for EKS Clusters, and that still appears to be the case today: https://docs.aws.amazon.com/eks/latest/APIReference/API_Operations.html

Either CloudFormation has access to a private API call, or more likely, the CloudFormation documentation currently only lists the replacement requirement in the AWS::EKS::Cluster documentation under ResourcesVpcConfig:

ResourcesVpcConfig
The VPC subnets and security groups used by the cluster control plane. Amazon EKS VPC resources have specific requirements to work properly with Kubernetes. For more information, see Cluster VPC Considerations and Cluster Security Group Considerations in the Amazon EKS User Guide.

Required: Yes

Type: EKS Cluster ResourcesVpcConfig

Update requires: Replacement

If you can point to how this can be accomplished via the public API, we can certainly implement it, but as far as I know, it's not currently possible to perform an in-place update of that information.

Hi @bflad
Thanks for the response. It seems that you're right and it's not possible to perform an update.
AWS documentation is quite confusing :)

AWS::EKS::Cluster
-- EKS Cluster ResourcesVpcConfig
---- Update requires: Replacement

AWS::EKS::Cluster
-- EKS Cluster ResourcesVpcConfig
---- SubnetIds
------ Update requires: No interruption

Thank you for taking the time to check this. I'm closing the issue.

There is now an API to update the VPC config, and a corresponding Go SDK function, UpdateClusterConfig.

We're only using it for the private/public endpoint settings; however, it supports updating the subnets and security groups as well:

https://github.com/terraform-providers/terraform-provider-aws/blob/b7fa69a80b66154d8ee72f101c13ebae290a7402/aws/resource_aws_eks_cluster.go#L299-L303

Can we reopen this now that there is API support?

Can we reopen this now that there is API support?

@bflad? :-)

"Important
At this time, you can not update the subnets or security group IDs for an existing cluster."

:(

It looks like the API will accept them as parameters, but then throws an error if you try to update them.

It appears that the subnets provisioned at cluster creation time are the subnets used by the control plane. This requirement does not apply to worker node subnets: subnets in the same VPC tagged with kubernetes.io/cluster/${cluster-name}=shared can host worker nodes for the cluster, which helps if what you are actually trying to do is grow the subnet space to support more IPs, as most are.

I will agree the documentation is kind of bad. It might make sense to keep the addressing space for the control plane tightly scoped and manage worker node subnets separately in Terraform to avoid this constraint (a sketch follows below); changes to the control plane subnets should be rare, if ever needed.

If you do need to move the control plane, we have to wait for AWS to add support for this, as it currently cannot be changed. Unfortunately for us, we were unaware of what the EKS subnets' actual roles were.
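
As a sketch of the tagging approach described above (the extra subnet resource, VPC reference, and CIDR here are hypothetical), a worker-node subnet can be added in the same VPC without touching the control plane's subnet_ids:

# Hypothetical extra worker-node subnet in the same VPC; the control plane's
# subnet_ids are left unchanged, so no cluster replacement is planned.
resource "aws_subnet" "cluster1_workers_extra" {
  vpc_id     = "${var.vpc_id}"
  cidr_block = "10.0.64.0/18"

  # Terraform 0.11 needs map() for interpolated map keys; the shared tag lets
  # Kubernetes/EKS use this subnet for worker nodes.
  tags = "${map(
    "kubernetes.io/cluster/${var.cluster1_name}", "shared"
  )}"
}

Worker nodes can then be launched into this subnet while the control plane keeps its original subnets.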

The AWS EKS API Reference for UpdateClusterConfig still has the following note:

At this time, you can not update the subnets or security group IDs for an existing cluster.

Since the upstream API does not support changing this, neither can Terraform, so closing this issue. Please contact AWS Support or your TAM to raise this feature request with the EKS service team.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
