Terraform-provider-azurerm: azurerm_kubernetes_cluster should use either service_principal OR identity.

Created on 3 Apr 2020  ·  10 Comments  ·  Source: terraform-providers/terraform-provider-azurerm

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.12.24
azurerm v2.4.0

Affected Resource(s)

  • azurerm_kubernetes_cluster

Terraform Configuration Files

resource "azurerm_kubernetes_cluster" "TerraAKSwithRBAC" {
/* service_principal {
    client_id         = var.K8SSPId
    client_secret     = var.K8SSPSecret
  }*/

  identity {
    type = "SystemAssigned"
  }
}

Debug Output

Error: Missing required argument

 Error: "service_principal": required field is not set

  on Modules/02_AKS_ClusterwithRBAC/Main.tf line 7, in resource "azurerm_kubernetes_cluster" "TerraAKSwithRBAC":
   7: resource "azurerm_kubernetes_cluster" "TerraAKSwithRBAC" {

Expected Behavior

The plan should succeed because the identity block is defined, as in the minimal sketch below.
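A minimal identity-only configuration that should plan cleanly; the resource names, the resource group reference and the node pool sizing here are illustrative placeholders, not from the original report:

resource "azurerm_kubernetes_cluster" "TerraAKSwithRBAC" {
  name                = "example-aks"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "exampleaks"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_DS2_v2"
  }

  # No service_principal block: the cluster should authenticate with a
  # system-assigned managed identity instead.
  identity {
    type = "SystemAssigned"
  }
}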

Actual Behavior

The provider complains that service_principal is not defined even though the alternative, identity, is provided. If both blocks are supplied, the plan works, but that is not acceptable: in AAD you use either an SPN or a managed identity (the latter is recommended, as managed identities are password-less).

Steps to Reproduce

Remove or comment out the service_principal section in the azurerm_kubernetes_cluster resource.

/* service_principal {
  client_id     = <Client ID of the SPN>
  client_secret = <secret of the SPN>
} */

Add the identity section (to replace the service_principal section):

identity {
  type = "SystemAssigned"
}

If both blocks are present, the identity is created; however, we want to stop using SPNs for the cluster altogether.

  1. terraform plan

Important Factoids

As of March 19, 2020, this feature (managed identity for AKS) is GA. https://github.com/Azure/AKS/issues/993

References

breaking-change  service/kubernetes-cluster  upstream-microsoft


All 10 comments

I would also take this opportunity to cover the following:

If an update requires modifying the azurerm_kubernetes_cluster resource in place (without re-deployment), check whether that regenerates the resource ID of the managed identity. If it does, it would break all downstream RBAC assignments.

I would expect Terraform to check whether the identity would change, or has changed, after terraform apply.
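One defensive option (a sketch, not something proposed in this thread) is Terraform's lifecycle meta-argument: prevent_destroy makes any plan that would replace the cluster - and thereby regenerate its system-assigned identity, breaking downstream RBAC - fail instead of applying:

resource "azurerm_kubernetes_cluster" "example" {
  # (required arguments omitted for brevity)

  identity {
    type = "SystemAssigned"
  }

  lifecycle {
    # Fail any plan that would destroy or replace the cluster, since
    # replacement would mint a new system-assigned identity principal ID.
    prevent_destroy = true
  }
}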

Thank you @asubmani for bringing that up - I had exactly the same issue today. I'm able to deploy my AKS cluster once using managed identity successfully, but the second run always fails because the client_secret in service_principal is different.

  service_principal {
    client_id     = var.client_id     # defaults to msi
    client_secret = var.client_secret # cannot be null
  }

  identity {
    type = "SystemAssigned"
  }
~ resource "azurerm_kubernetes_cluster" "deployment" {
      ...
      ~ service_principal {
            client_id     = "msi"
          ~ client_secret = (sensitive value)
        }
    }
Error: Error updating Service Principal for Kubernetes Cluster "suitabletunaaks" (Resource Group "cet-rampup-weu"): containerservice.ManagedClustersClient#ResetServicePrincipalProfile: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Updating service principal profile is not allowed on MSI cluster."

  on kubernetes-cluster.tf line 10, in resource "azurerm_kubernetes_cluster" "deployment":
  10: resource "azurerm_kubernetes_cluster" "deployment" {

It would be great not to require service_principal when identity is set (and vice versa), and to set client_secret to null and client_id to "msi" automatically. But it seems this comes with some complexity.

I tried to solve it on my own, but I don't have much experience contributing to a Terraform provider yet. Here's my attempt: https://github.com/terraform-providers/terraform-provider-azurerm/compare/master...heoelri:patch-2 (@tombuildsstuff it would be great if you could take a look in case you haven't already finished it)

@heoelri sorry, just seen this - thanks for taking a look. Unfortunately, since you posted this comment the fix for this has been merged via #6095, so it will ship in v2.5 of the Azure Provider in the near future.

Since this has been fixed via #6095, I'm going to close this issue for the moment - but this will ship in v2.5 of the Azure Provider in the near future.

Thanks!

This has been released in version 2.5.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 2.5.0"
}
# ... other configuration ...

I'm getting "Unsupported attribute" when using azurerm_kubernetes_cluster.example.kubelet_identity.object_id with azurerm 2.6.0.

Try using the following syntax: azurerm_kubernetes_cluster.<name>.kubelet_identity.0.object_id
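kubelet_identity is exported as a list of objects, so the first element has to be indexed explicitly. A minimal sketch (the resource name "example" is a placeholder):

output "kubelet_object_id" {
  # kubelet_identity is a list, hence the .0 index on the first element.
  value = azurerm_kubernetes_cluster.example.kubelet_identity.0.object_id
}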

@heoelri azurerm_kubernetes_cluster.aks.kubelet_identity is an empty list of objects, so I get:
The given key does not identify an element in this collection value.

Ok, weird. Have you deployed your cluster with MI (managed identity) enabled? Here's my config:

provider "azurerm" {
  version                     = ">= 2.6.0"
  [..]
}

resource "azurerm_kubernetes_cluster" "deployment" {
  [..]
  identity {
      type = "SystemAssigned"
  }
  [..]
}

resource "azurerm_role_assignment" "acrpull_role" {
  scope                            = data.azurerm_subscription.primary.id 
  role_definition_name             = "AcrPull"
  principal_id                     = azurerm_kubernetes_cluster.deployment.kubelet_identity.0.object_id 
  skip_service_principal_aad_check = true
}

@heoelri I found the issue. If you deploy AKS and the role assignment together for the first time, you get the error I mentioned. However, if you deploy AKS first and then deploy the role assignment, it works fine. It seems to be a bug. Thanks! :-)
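One way to work around that ordering problem within a single configuration, assuming the resource names from the config above, is a targeted apply that creates the cluster first so kubelet_identity is populated before the role assignment is planned - a sketch, not an officially documented fix:

# Create the cluster alone first so its kubelet_identity is known ...
terraform apply -target=azurerm_kubernetes_cluster.deployment

# ... then apply the rest, including the role assignment.
terraform apply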

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
