Terraform v0.11.7
+ provider.azurerm v1.13.0
provider "azurerm" {}

resource "azurerm_resource_group" "test" {
  name     = "acctestRG1"
  location = "East US"
}

resource "azurerm_azuread_application" "test" {
  name = "acctestRG1"
}

resource "azurerm_azuread_service_principal" "test" {
  application_id = "${azurerm_azuread_application.test.application_id}"
}

resource "azurerm_azuread_service_principal_password" "test" {
  service_principal_id = "${azurerm_azuread_service_principal.test.id}"
  value                = "CHANGE_THIS_PASSWORD"
  end_date             = "2020-01-01T01:02:03Z"
}

resource "azurerm_network_security_group" "test_advanced_network" {
  name                = "akc-1-nsg"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
}

resource "azurerm_virtual_network" "test_advanced_network" {
  name                = "akc-1-vnet"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  address_space       = ["10.1.0.0/16"]
}

resource "azurerm_subnet" "test_subnet" {
  name                      = "akc-1-subnet"
  resource_group_name       = "${azurerm_resource_group.test.name}"
  network_security_group_id = "${azurerm_network_security_group.test_advanced_network.id}"
  address_prefix            = "10.1.0.0/24"
  virtual_network_name      = "${azurerm_virtual_network.test_advanced_network.name}"
}

resource "azurerm_kubernetes_cluster" "test" {
  name                = "akc-1"
  location            = "${azurerm_resource_group.test.location}"
  dns_prefix          = "akc-1"
  resource_group_name = "${azurerm_resource_group.test.name}"

  linux_profile {
    admin_username = "acctestuser1"

    ssh_key {
      key_data = "ssh-rsa ..."
    }
  }

  agent_pool_profile {
    name    = "agentpool"
    count   = "2"
    vm_size = "Standard_DS2_v2"
    os_type = "Linux"

    # *NOTE* Uncomment the following on the second run to recreate the issue.
    #vnet_subnet_id = "${azurerm_subnet.test_subnet.id}"
  }

  service_principal {
    client_id     = "${azurerm_azuread_service_principal.test.application_id}"
    client_secret = "${azurerm_azuread_service_principal_password.test.value}"
  }

  # *NOTE* Uncomment the following on the second run to recreate the issue.
  #network_profile {
  #  network_plugin = "azure"
  #}
}
Validation introduced in #1723 should not fail when converting an existing azurerm_kubernetes_cluster from the kubenet network_plugin type to azure.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
azurerm_resource_group.test: Refreshing state... (ID: /subscriptions/XXXXXX/resourceGroups/acctestRG1)
azurerm_azuread_application.test: Refreshing state... (ID: YYYYY)
azurerm_network_security_group.test_advanced_network: Refreshing state... (ID: /subscriptions/XXXXXX/networkSecurityGroups/akc-1-nsg)
azurerm_virtual_network.test_advanced_network: Refreshing state... (ID: /subscriptions/XXXXXX/virtualNetworks/akc-1-vnet)
azurerm_azuread_service_principal.test: Refreshing state... (ID: YYYYY)
azurerm_azuread_service_principal_password.test: Refreshing state... (ID: YYYYY)
azurerm_subnet.test_subnet: Refreshing state... (ID: /subscriptions/XXXXXX/akc-1-vnet/subnets/akc-1-subnet)
azurerm_kubernetes_cluster.test: Refreshing state... (ID: /subscriptions/XXXXXX/managedClusters/akc-1)
------------------------------------------------------------------------
Error: Error running plan: 1 error(s) occurred:
* azurerm_kubernetes_cluster.test: 1 error(s) occurred:
* azurerm_kubernetes_cluster.test: The `pod_cidr` field in the `network_profile` block can not be specified when `network_plugin` is set to `azure`. Please remove `pod_cidr` or set `network_plugin` to `kubenet`.
Running `terraform apply` to convert from the kubenet to the azure network_plugin type reproduces this. If you don't set any network plugin, AKS defaults to kubenet. The response returned is:
"networkProfile": {
  "networkPlugin": "kubenet",
  "podCidr": "10.244.0.0/16",
  "serviceCidr": "10.0.0.0/16",
  "dnsServiceIP": "10.0.0.10",
  "dockerBridgeCidr": "172.17.0.1/16"
}
This is also stored in the state file.
Changing network_plugin to azure then triggers the validation error defined in CustomizeDiff. We probably need to remove that validation.
The AKS team added the same validation logic on their side (https://github.com/Azure/acs-engine/pull/3562) to reject a non-empty pod_cidr when network_plugin is azure.
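The failing check can be sketched as a standalone function. This is a simplified illustration of the rule the comment describes, not the provider's actual CustomizeDiff code; the function name and signature here are assumptions:

```go
package main

import "fmt"

// validateNetworkProfile mirrors the rule enforced in CustomizeDiff:
// a non-empty pod_cidr is rejected when network_plugin is "azure".
// Hypothetical helper for illustration only.
func validateNetworkProfile(networkPlugin, podCIDR string) error {
	if networkPlugin == "azure" && podCIDR != "" {
		return fmt.Errorf("the `pod_cidr` field in the `network_profile` block can not be specified when `network_plugin` is set to `azure`")
	}
	return nil
}

func main() {
	// A fresh cluster created with the azure plugin has no pod_cidr: passes.
	fmt.Println(validateNetworkProfile("azure", ""))

	// Converting from kubenet: the state still carries podCidr 10.244.0.0/16,
	// so the diff sees plugin=azure with a non-empty pod_cidr and errors out.
	fmt.Println(validateNetworkProfile("azure", "10.244.0.0/16"))
}
```

Because the diff is computed against the stored state, the pod_cidr that AKS assigned under kubenet makes this check fail even though the user never set pod_cidr in their configuration.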
Fixed via #1783 which will be released as part of the next version of the AzureRM Provider