Terraform-provider-azurerm: "`pod_cidr` and `azure` cannot be set together" error even when pod_cidr is never set in Azure GOV

Created on 25 Oct 2019  ·  9 Comments  ·  Source: terraform-providers/terraform-provider-azurerm

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.12.12
provider.azurerm v1.35.0

Affected Resource(s)


azurerm_kubernetes_cluster

Terraform Configuration Files

resource "azurerm_kubernetes_cluster" "aks" {

  resource_group_name = azurerm_resource_group.rg.name
  location            = var.location

  name               = module.naming_conventions.aks_name["aks"]
  dns_prefix         = module.naming_conventions.endpoint_name["aks"]
  kubernetes_version = var.aks_kubernetes_version

  dynamic "agent_pool_profile" {
    for_each = local.aks_agent_pool_profiles
    content {
      name            = lookup(agent_pool_profile.value, "name", null)
      type            = lookup(agent_pool_profile.value, "type", null)
      count           = lookup(agent_pool_profile.value, "count", null)
      max_pods        = lookup(agent_pool_profile.value, "max_pods", null)
      vm_size         = lookup(agent_pool_profile.value, "vm_size", null)
      os_type         = lookup(agent_pool_profile.value, "os_type", null)
      os_disk_size_gb = lookup(agent_pool_profile.value, "os_disk_size_gb", null)
      vnet_subnet_id  = module.commandcenter_vnet.vnet_subnets[0]
    }
  }

  ## The service principal is needed to dynamically create and manage other Azure resources such as an Azure load balancer or container registry (ACR).
  service_principal {
    client_id     = data.vault_generic_secret.aks_secrets.data["ARM_CLIENT_ID"]
    client_secret = data.vault_generic_secret.aks_secrets.data["ARM_CLIENT_SECRET"]
  }

  addon_profile {
    oms_agent {
      enabled                    = true
      log_analytics_workspace_id = azurerm_log_analytics_workspace.aks_workspace.id
    }
  }

  network_profile {
    network_plugin     = var.aks_network_plugin
    network_policy     = var.aks_network_policy
    docker_bridge_cidr = cidrhost(cidrsubnet(module.commandcenter_vnet.vnet_subnet_cidrs[0], 1, 1), 2)
    dns_service_ip     = cidrhost(cidrsubnet(module.commandcenter_vnet.vnet_subnet_cidrs[0], 1, 0), 2)
    service_cidr       = cidrsubnet(module.commandcenter_vnet.vnet_subnet_cidrs[0], 1, 0)
  }

  tags = module.naming_conventions.tags
}

Debug Output

Panic Output

Expected Behavior

The azure CNI plugin should be configured for the cluster.

Actual Behavior

The azure CNI is not applied, and the following error is returned even though pod_cidr is never set:

Error: pod_cidr and azure cannot be set together.

on aks.tf line 20, in resource "azurerm_kubernetes_cluster" "aks":
20: resource "azurerm_kubernetes_cluster" "aks" {

Steps to Reproduce

  1. Login to azure gov
  2. Create terraform with network plugin set to azure
  3. Do not specify pod_cidr
  4. Terraform apply
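
The steps above boil down to a network_profile like the following (an illustrative fragment, not the reporter's exact config):

```hcl
# Illustrative fragment only: network_plugin is "azure" and pod_cidr is
# deliberately omitted, yet the provider still raises
# "pod_cidr and azure cannot be set together".
network_profile {
  network_plugin = "azure"
  # pod_cidr is only valid with network_plugin = "kubenet", so it is not set here
}
```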

Important Factoids

  • We are using azure government

References

  • #0000
Labels: bug, service/kubernetes-cluster

Most helpful comment

@AshWilliams After tearing down the cluster and setting it up again, I managed to get it working. It seems that if the cluster is set up with a subnet, Terraform defaults the CNI to kubenet and pod_cidr is set. This can only be changed by setting up the cluster from scratch.

All 9 comments

@phalcon30964 Have you found a workaround for now as I face the same issue?

@phalcon30964 Same issue here. Did you find a workaround?

@AshWilliams After tearing down the cluster and setting it up again, I managed to get it working. It seems that if the cluster is set up with a subnet, Terraform defaults the CNI to kubenet and pod_cidr is set. This can only be changed by setting up the cluster from scratch.

This isn't specific to azure/government.

@AshWilliams After tearing down the cluster and setting it up again, I managed to get it working. It seems that if the cluster is set up with a subnet, Terraform defaults the CNI to kubenet and pod_cidr is set. This can only be changed by setting up the cluster from scratch.

Excellent. Simply tainting the resource let Terraform recreate it. Thanks so much!
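
For reference, the workaround described above amounts to forcing replacement of the cluster (the resource address below matches the config in this issue; adjust it for your own):

```shell
# Mark the cluster for recreation so the CNI setting is applied from scratch
terraform taint azurerm_kubernetes_cluster.aks
terraform apply
```

Note that tainting destroys and recreates the cluster, so plan for downtime.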

You can't have pod_cidr set while using the non-kubenet CNIs; I have a duplicate ticket to this: #5165.

👋

There have been substantial changes to the AKS resources since this version of the Azure Provider was released (this issue is regarding v1.35; we're currently on v2.4), so I'd suggest upgrading to the latest version of the Azure Provider.

If upgrading doesn't fix this, please let us know and we'll take another look. However, since I believe this should be fixed by updating the version of the Azure Provider being used, I'm going to close this issue for the moment.

Thanks!

This is not fixed in provider 2.5. Passing network_plugin = "azure" with pod_cidr = null produces the error added in https://github.com/Azure/acs-engine/pull/3562/commits/ddeeb7df0ad62cc75b6ab214b07ac9af016e1d5a# because pod_cidr is not "", while passing network_plugin = "azure" with pod_cidr = "" produces: network_profile.0.pod_cidr must start with IPV4 address and/or slash, number of bits (0-32) as prefix. Example: 127.0.0.1/8. Got ""

So if we set pod_cidr at all, whether to null or to "", network_plugin can't be "azure". This means we can't use a dynamic block to have one module represent an AKS cluster that can be configured to use either plugin.
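
A sketch of the pattern being attempted, assuming pod_cidr were treated as unset when null (which, per the comment above, provider v2.5 does not do). The variable names here are assumptions, not the provider's API:

```hcl
network_profile {
  network_plugin = var.aks_network_plugin
  # Intended behavior: null omits the attribute entirely, so kubenet gets a
  # pod_cidr and azure gets none. In v2.5 null still trips the
  # "pod_cidr and azure cannot be set together" validation.
  pod_cidr = var.aks_network_plugin == "kubenet" ? var.aks_pod_cidr : null
}
```

Here var.aks_network_plugin and var.aks_pod_cidr are assumed module variables.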

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
