Terraform-provider-azurerm: Support for AKS node_taints in default_node_pool again

Created on 6 Nov 2020 · 7 Comments · Source: terraform-providers/terraform-provider-azurerm

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

I understand the background of #8982: node_taints is no longer configurable from v2.35. However, as of the latest update, the AKS API allows an exception for the CriticalAddonsOnly taint on system node pools (or on all node pools).

https://github.com/Azure/AKS/issues/1833

This option is very useful and critical, so could you please consider supporting it again?

New or Affected Resource(s)

  • azurerm_kubernetes_cluster

Potential Terraform Configuration

  default_node_pool {
    node_taints = ["CriticalAddonsOnly=true:PreferNoSchedule"]
  }
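
For illustration, here is a minimal sketch of how this could look inside a full resource once re-enabled (resource names, sizes and the identity block are placeholders, and current provider versions reject node_taints in this block):

  resource "azurerm_kubernetes_cluster" "example" {
    name                = "example-aks"
    location            = azurerm_resource_group.example.location
    resource_group_name = azurerm_resource_group.example.name
    dns_prefix          = "exampleaks"

    default_node_pool {
      name       = "system"
      vm_size    = "Standard_D2s_v3"
      node_count = 1

      # The taint the AKS API now permits on system node pools.
      node_taints = ["CriticalAddonsOnly=true:PreferNoSchedule"]
    }

    identity {
      type = "SystemAssigned"
    }
  }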

References

  • #8982

  • question service/kubernetes-cluster

    Most helpful comment

    @tombuildsstuff Any thoughts on this? I could re-add taints with validation that allows only CriticalAddonsOnly=true:NoSchedule?

    All 7 comments

    @ToruMakabe Since I'm the one who "broke" this: looking at the linked issue, I'm not sure whether the 2020-09-01 API version will allow for that, or whether they released a new API version?

    @favoretti According to the latest AKS release note, they allowed it without a new API version. https://github.com/Azure/AKS/blob/master/CHANGELOG.md#release-2020-10-26

    The release is rolling out now, and it is available in some regions such as Japan East. So, node_taints in default_node_pool has been successfully applied with azurerm v2.34.

    I wasn't sure if this was supposed to work with the latest 2.39.0 but it appears not:

    Error: expanding `default_node_pool`: The AKS API has removed support for tainting all nodes in the default node pool and it is no longer possible to configure this. To taint a node pool, create a separate one
    

    ...at least for australiaeast

    @dhirschfeld I haven't been able to do anything on this since the deprecation; there was a chat on how to approach it, but it didn't get further than that. If you could elaborate on your use case for this, it might help the HashiCorp folks weigh in on re-introducing at least the taints that the API now allows.

    Also, what would be an argument against leaving the default node pool small for kube-system workloads and creating an additional one that can be tainted with anything?
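
    For what it's worth, that workaround works today: additional node pools created with azurerm_kubernetes_cluster_node_pool still accept node_taints. A rough sketch, assuming a cluster named azurerm_kubernetes_cluster.example (pool name, size and taint are placeholders):

    resource "azurerm_kubernetes_cluster_node_pool" "user" {
      name                  = "user"
      kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
      vm_size               = "Standard_D2s_v3"
      node_count            = 1

      # Taints remain configurable on additional (non-default) node pools.
      node_taints = ["workload=user:NoSchedule"]
    }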

    Also, what would be an argument against leaving the default node pool small for kube-system workloads and creating an additional one that can be tainted with anything?

    Isn't that the exact purpose of CriticalAddonsOnly=true:NoSchedule on the default node pool?

    i.e. a taint stops pods from being scheduled on a node (unless they have a matching toleration). If there is no taint on the default node pool then, IIUC, there's nothing stopping user pods from being scheduled on that node pool, which is exactly what I'd like to prevent.

    I think you could get the same effect by defining an anti-affinity for the system node pool on every pod, but that seems like a lot of boilerplate, and if you forgot to do it the pod could again be scheduled on a system node.
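
    To make the boilerplate concrete, here is a rough sketch of what that per-workload anti-affinity could look like, expressed with the Terraform kubernetes provider rather than raw YAML (the "agentpool" label, the "system" pool name and the pod itself are assumptions for illustration):

    resource "kubernetes_pod" "app" {
      metadata {
        name = "app"
      }

      spec {
        # Keep this pod off the system node pool. "agentpool" is the node
        # label AKS applies with the pool name; "system" is an assumed name.
        affinity {
          node_affinity {
            required_during_scheduling_ignored_during_execution {
              node_selector_term {
                match_expressions {
                  key      = "agentpool"
                  operator = "NotIn"
                  values   = ["system"]
                }
              }
            }
          }
        }

        container {
          name  = "app"
          image = "nginx:1.19"
        }
      }
    }

    Repeating that on every workload is exactly the kind of boilerplate a single CriticalAddonsOnly taint on the default pool would avoid.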

    Disclaimer: I'm not a k8s expert so I might be completely off track!
    just trying to apply best practice (separating system from user nodes/pods) as defined in:
    https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-advanced-scheduler

    @tombuildsstuff Any thoughts on this? I could re-add taints with validation that allows only CriticalAddonsOnly=true:NoSchedule?

    @dhirschfeld

    I wasn't sure if this was supposed to work with the latest 2.39.0 but it appears not:

    Same for westeurope with the 2.40.0 azurerm provider:

    Error: expanding `default_node_pool`: The AKS API has removed support for tainting all nodes in the default node pool and it is no longer possible to configure this. To taint a node pool, create a separate one
    