```
Terraform v0.11.3
+ provider.azurerm v1.1.2
```
```tf
resource "azurerm_kubernetes_cluster" "demo" {
  name                = "demo"
  location            = "${azurerm_resource_group.demo.location}"
  resource_group_name = "${azurerm_resource_group.demo.name}"

  linux_profile {
    admin_username = "demo-aks-user1"

    ssh_key {
      key_data = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCeoqNrR2+tOkGRHhH/Ur7Jm3EIpAI8wIqdGj+qE2GyfTonsQ9WBMASbWRt3SXMRZtNxx2Bf5JK4zxrE8DBVgWKIQiRMLjSvOK8yliBdaCK5cUdXYYujLzqrjSD01VsB85f6feWXYkscQvNaiLDgRXXwmPqtgfiXyMeReX2QVMHzw6aocsLqBEwWphLYZKqr+d0FNIZtZ9QbaacBGsx+8QuTf4YTPQY7cc/6daqS6Md/1BNxI9RhlvCUxxZMG0NaeARs0y5NIBquOuokH7o50dOpOhocYl2bmlaNVE14rxOxMgmoAUx4CR+z3LlYUsxJxu3BSquUm9BlscM1X28X6e9"
    }
  }

  agent_pool_profile {
    name            = "demo"
    count           = 1
    dns_prefix      = "demoagent1"
    vm_size         = "Standard_A2_v2"
    storage_profile = "ManagedDisks"
    os_type         = "Linux"
  }

  service_principal {
    client_id     = "${var.aks_sp_client_id}"
    client_secret = "${var.aks_sp_client_secret}"
  }

  tags {
    Environment = "demo"
  }
}
```
https://gist.github.com/yasn77/3dfa6505846f9388303543d7ae3811cf
A resource copied pretty much as-is from the documentation fails to validate.
When running `plan` or `validate`, Terraform errors with the following:

```
Error: azurerm_kubernetes_cluster.demo: "agent_pool_profile.0.dns_prefix": this field cannot be set
Error: azurerm_kubernetes_cluster.demo: agent_pool_profile.0: invalid or unknown key: storage_profile
```
Steps to reproduce:
1. Copy the documentation example pretty much as-is.
2. Run `terraform validate .`
Same here
I hit the same issue; it appears to be because the `dns_prefix` property is in the wrong place.
I've moved it up a level, which removes the error, and the cluster deployed successfully. I also removed the `storage_profile` property. @sozercan what is the best way to update the doc? Should I submit a PR?
My config looks as follows:
```tf
resource "azurerm_resource_group" "test" {
  name     = "lg-terraformtest"
  location = "Central US"
}

resource "azurerm_kubernetes_cluster" "test" {
  name                = "lg-testcluter"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  kubernetes_version  = "1.8.2"

  linux_profile {
    admin_username = "azureuser"

    ssh_key {
      key_data = "ssh-rsa asdfasdfasdf"
    }
  }

  dns_prefix = "acctestagent1"

  agent_pool_profile {
    name    = "default"
    count   = 1
    vm_size = "Standard_A0"
    os_type = "Linux"
  }

  service_principal {
    client_id     = "nearly"
    client_secret = "postedthis!"
  }

  tags {
    Environment = "Production"
  }
}
```
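For anyone skimming the thread: the working config differs from the broken documentation example in exactly two places — `dns_prefix` moves up to the resource level, and `storage_profile` is dropped from `agent_pool_profile`. A sketch of the change against the original `demo` resource (other arguments elided):

```diff
 resource "azurerm_kubernetes_cluster" "demo" {
   # ...
+  dns_prefix = "demoagent1"

   agent_pool_profile {
     name            = "demo"
     count           = 1
-    dns_prefix      = "demoagent1"
     vm_size         = "Standard_A2_v2"
-    storage_profile = "ManagedDisks"
     os_type         = "Linux"
   }
   # ...
 }
```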
@lawrencegripper Thanks, that worked for me.
Guess it's just the documentation that needs to be updated then.
@lawrencegripper thanks! Can you remove `storage_profile` from the `agent_pool_profile` description below too?
@sozercan just noticed this when reviewing #862 - I've opened #867 which includes a fix for this
Since the docs have been updated, this issue is not really an issue anymore :smile_cat: So closing...
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!