Terraform-provider-azurerm: the default value of load_balancer_sku is not standard if network_profile is not specified

Created on 17 Mar 2020  ·  5 Comments  ·  Source: terraform-providers/terraform-provider-azurerm

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.12.23

Affected Resource(s)

  • azurerm_kubernetes_cluster

Terraform Configuration Files

esource "azurerm_kubernetes_cluster" "example" {
  name                 = "myclustername"
  location             = var.location
  resource_group_name  = var.rg_name
  dns_prefix           = "myclusternamedns"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }

  tags = {
    Environment = "Production"
  }
}

output "client_certificate" {
  value = azurerm_kubernetes_cluster.example.kube_config.0.client_certificate
}

output "kube_config" {
  value = azurerm_kubernetes_cluster.example.kube_config_raw
}

Expected Behavior

The load balancer should be standard.

Actual Behavior

If we don't specify the network_profile, then the load_balancer_sku is set to basic.
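
On affected versions, a workaround is to declare the network_profile block explicitly inside the cluster resource and pin the SKU yourself. The snippet below is only a sketch to be added to the resource above; the accepted casing of the value ("standard" vs "Standard") has varied between provider releases, so check the documentation for the version you are running:

  network_profile {
    network_plugin    = "kubenet"   # required whenever network_profile is set
    load_balancer_sku = "standard"  # casing may differ by provider version
  }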

Steps to Reproduce

  1. terraform apply

bug service/kubernetes-cluster

All 5 comments

hey @bingosummer

Thanks for opening this issue.

As @jluk has mentioned, this issue has been fixed in version 2.0 of the Azure Provider, where the default value has been changed from Basic to Standard - as such you should be able to resolve this (for a new cluster) by upgrading to that release.

Since this should be fixed by updating to version 2.0 (or later - the current version is 2.3) of the Azure Provider, I'm going to close this issue for the moment, but please let us know if that doesn't work for you and we'll take another look.

Thanks!
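
For completeness, pinning the provider in Terraform 0.12 looks roughly like the sketch below - the exact version constraint is just an example, and the 2.x line also requires the (empty) features block:

provider "azurerm" {
  version  = "~> 2.3"  # example constraint only - pick the release you need
  features {}          # required by the 2.x provider line
}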

There is a bug somewhere, either in the documentation or in this provider, because I'm using 2.3.0 and I still get a Basic SKU.

The documentation states that the default load balancer SKU is Standard:

load_balancer_sku - (Optional) Specifies the SKU of the Load Balancer used for this Kubernetes Cluster. Possible values are Basic and Standard. Defaults to Standard.

But it's actually Basic:

(screenshot: the cluster's load balancer provisioned with the Basic SKU)

I'm using this provider v2.3.0 and my azurerm_kubernetes_cluster resource is at https://github.com/rgl/terraform-azure-aks-example/blob/7600bc739a50a926f1fc309a04ba0baca8eeb1f4/main.tf#L97-L149
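
For anyone wanting to double-check, a rough way to see what Terraform has recorded is to read the value back from the resource (using the resource name from the configuration at the top of this issue). This assumes the exported network_profile block includes load_balancer_sku, so verify against the docs for your provider version:

output "effective_load_balancer_sku" {
  # hypothetical output; assumes network_profile.0.load_balancer_sku is exported
  value = azurerm_kubernetes_cluster.example.network_profile.0.load_balancer_sku
}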

@tombuildsstuff The problem still exists in version 2.7.0. The documentation states that "Standard" is the default value, but it is actually "Basic" when the network_profile section is not specified.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
