Terraform-provider-azurerm: Adding a taint to an azurerm_kubernetes_cluster_node_pool results in reprovisioning the pool

Created on 22 Sep 2020 · 6 Comments · Source: terraform-providers/terraform-provider-azurerm

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform version: 0.12.26
AzureRM Provider version: 2.28.0

Affected Resource(s)

  • azurerm_kubernetes_cluster_node_pool

Terraform Configuration Files


First - provisioning without taint

resource "azurerm_kubernetes_cluster_node_pool" "mypool" {
  name                  = "mypool"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.akscluster.id
  orchestrator_version  = var.aks-kubernetes-version
  vm_size               = var.mypool-vm-size
  enable_auto_scaling   = true
  lifecycle {
    ignore_changes = [node_count]
  }
  min_count             = 2
  max_count             = 3
  node_count            = 2
  vnet_subnet_id        = azurerm_subnet.dev-subnet.id
}

Second - adding the taint

resource "azurerm_kubernetes_cluster_node_pool" "mypool" {
  name                  = "mypool"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.akscluster.id
  orchestrator_version  = var.aks-kubernetes-version
  vm_size               = var.mypool-vm-size
  node_taints           = ["mytaint=true:NoSchedule"]
  enable_auto_scaling   = true
  lifecycle {
    ignore_changes = [node_count]
  }
  min_count             = 2
  max_count             = 3
  node_count            = 2
  vnet_subnet_id        = azurerm_subnet.dev-subnet.id
}

Debug Output

Expected Behavior


The taint should simply have been added in place (as if I had added it manually).

Actual Behavior


terraform plan reports that a replacement of the node pool will be forced.

      ~ node_taints           = [ # forces replacement
          + "mytaint=true:NoSchedule",
        ]

Steps to Reproduce

  1. Provision a cluster with one additional node pool. The node pool should not have any taints.
  2. Add a taint to the node pool e.g. node_taints = ["mytaint=true:NoSchedule"]
  3. Run terraform plan

The plan will now report that a forced replacement of the pool is required.
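If the replacement cannot be avoided, one way to soften its impact (a minimal sketch, not taken from this issue, and using a hypothetical new pool name because AKS node pool names must be unique within a cluster) is to let Terraform provision the replacement pool before destroying the existing one:

resource "azurerm_kubernetes_cluster_node_pool" "mypool" {
  # Hypothetical name: with create_before_destroy the old and new pools
  # briefly coexist, so the replacement cannot reuse "mypool".
  name                  = "mypool2"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.akscluster.id
  orchestrator_version  = var.aks-kubernetes-version
  vm_size               = var.mypool-vm-size
  node_taints           = ["mytaint=true:NoSchedule"]
  enable_auto_scaling   = true
  min_count             = 2
  max_count             = 3
  node_count            = 2
  vnet_subnet_id        = azurerm_subnet.dev-subnet.id

  lifecycle {
    ignore_changes        = [node_count]
    create_before_destroy = true
  }
}

With create_before_destroy Terraform creates the replacement pool before removing the existing one, which shortens the window in which the pool's capacity is unavailable.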

question service/kubernetes-cluster upstream/microsoft

All 6 comments

Thanks for opening this issue. After checking, it seems taints cannot be added after the node pool has been created, per the documentation. So it's an API limitation.

@neil-yechenwei Thanks for your fast response.
Oh, I see.

But it appears this makes managing taints from the AzureRM provider fairly pointless: if you have to maintain them with, e.g., kubectl anyway, there is no need to add them from Terraform in the first place.
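As a sketch of what managing the taints outside Terraform could look like on the Terraform side (an assumption, not something suggested in this thread): adding node_taints to ignore_changes, next to the existing node_count entry, keeps a later change to that attribute from triggering the forced replacement, at the cost of Terraform no longer reconciling it.

  lifecycle {
    # Assumption: taints are maintained out of band, so Terraform should
    # not act on differences in node_taints.
    ignore_changes = [
      node_count,
      node_taints,
    ]
  }

Attributes listed in ignore_changes are held at their previously stored value during planning, so the diff that forces the replacement never appears.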

So maybe an update to the API would be a solution. How/where do we request such a change?

@janlunddk as @neil-yechenwei has mentioned, unfortunately this is an API limitation - when node taints were first introduced they could be updated, but that behaviour has since changed. Perhaps @jluk could confirm whether there are plans to change that, so this field can be updated and remains useful to expose in Terraform?

Raised Azure/azure-rest-api-specs#11137 upstream for tracking.

👋

Since this is an Azure API enhancement I'm going to close this issue in favour of the upstream issue: https://github.com/Azure/azure-rest-api-specs/issues/11137

Thanks!

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
