Terraform v0.11.11
provider.azurerm = 1.22.0
```hcl
resource "azurerm_kubernetes_cluster" "xxx" {
  name                = "${var.kubernetes_xxx_cluster_name}-${var.environment}"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group}"
  dns_prefix          = "${var.kubernetes_xxx_cluster_name}-${var.environment}"
  kubernetes_version  = "1.12.4"

  linux_profile {
    admin_username = "xxx"

    ssh_key {
      key_data = "${var.kubernetes_ssh_pub_key}"
    }
  }

  agent_pool_profile {
    name            = "default"
    count           = "${var.kubernetes_xxx_agent_count}"
    vm_size         = "${var.kubernetes_xxx_agent_size}"
    os_type         = "Linux"
    os_disk_size_gb = 30
    vnet_subnet_id  = "${azurerm_subnet.k8s_xxx.id}"
  }

  service_principal {
    client_id     = "${var.client_id}"
    client_secret = "${var.client_secret}"
  }

  network_profile {
    network_plugin = "azure"
  }

  tags {
    Environment = "Live"
  }
}
```
When changing linux_profile.ssh_key, the administrator's public key should be updated in the node profile and, ideally, on existing nodes. This should be an update-in-place operation, similar to updating the keys on a standard Azure VM.
Instead, Terraform wants to destroy the existing cluster and create a new one.
Editing the linux_profile section in the .tfstate file, or removing and re-importing the resource, does not work around this problem, since the state is always recreated from the cluster (it must be stored somewhere in Azure).
Steps to reproduce:
1. Create a cluster with a linux_profile section, with ssh_key A (see example above).
2. Run terraform apply.
3. Change linux_profile.ssh_key to a new key B.
4. Run terraform apply again (see the sketch below).
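For concreteness, a minimal shell sketch of those steps, assuming the variables from the config above and hypothetical local key files key_a.pub and key_b.pub:

```bash
# Initial apply with key A (key_a.pub / key_b.pub are hypothetical local files).
terraform apply -var "kubernetes_ssh_pub_key=$(cat key_a.pub)"

# Rotate to key B and re-apply: the plan reports that the changed
# linux_profile.ssh_key forces replacement of the entire cluster.
terraform apply -var "kubernetes_ssh_pub_key=$(cat key_b.pub)"
```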
hi @ellispritchard
Thanks for opening this issue :)
As far as I'm aware it's not possible to do this through the AKS API at this time (which is why Terraform forces the recreation of the cluster here) - are you aware if this is possible through the Azure CLI (or another mechanism)?
Thanks!
There is a documented AKS cluster Create/Update REST endpoint, which contains the linux profile section: https://docs.microsoft.com/en-us/rest/api/aks/managedclusters/createorupdate#containerservicelinuxprofile
I don't know what happens when you try to use it for an update (there's no detailed documentation), so I would have to experiment.
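A rough sketch of what that experiment might look like, using curl against the documented ManagedClusters Create Or Update endpoint. The api-version, the subscription/resource-group/cluster names, and the minimal payload shape are assumptions based on the linked reference, and it is unverified whether AKS actually honours a changed keyData on update:

```bash
# Acquire an ARM access token via the Azure CLI.
TOKEN=$(az account get-access-token --query accessToken -o tsv)

SUB="00000000-0000-0000-0000-000000000000"   # placeholder subscription id
RG="my-resource-group"                        # placeholder resource group
CLUSTER="my-aks-cluster"                      # placeholder cluster name

# PUT to the Create Or Update endpoint with only the linuxProfile changed.
# The api-version and the minimal body below are assumptions; a real update
# would normally send the full existing cluster definition back.
curl -X PUT \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  "https://management.azure.com/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.ContainerService/managedClusters/$CLUSTER?api-version=2019-02-01" \
  -d '{
    "location": "westeurope",
    "properties": {
      "linuxProfile": {
        "adminUsername": "xxx",
        "ssh": { "publicKeys": [ { "keyData": "ssh-rsa AAAA... new-key" } ] }
      }
    }
  }'
```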
Otherwise, each existing VM in the cluster is simply an Azure VM within the special nodeResourceGroup resource group listed by e.g. the az aks show command, so the Linux agent could be used to update the keys. I'm less bothered about automating that, though, since it's easy to delete keys on existing VMs: it's more the stickiness of the linux_profile in the cluster config that concerns me.
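For the per-VM route, a hedged Azure CLI sketch: the resource group, cluster name, admin username, and key path are placeholders, and this only covers plain VMs sitting in the node resource group:

```bash
# Find the special node resource group (MC_*) for the cluster.
NODE_RG=$(az aks show -g my-resource-group -n my-aks-cluster \
  --query nodeResourceGroup -o tsv)

# Push the new public key to every node VM via the Linux (VMAccess) agent.
for VM in $(az vm list -g "$NODE_RG" --query "[].name" -o tsv); do
  az vm user update \
    -g "$NODE_RG" -n "$VM" \
    --username xxx \
    --ssh-key-value "$(cat key_b.pub)"
done
```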
It seems like one should be able to add SSH keys to an existing cluster. Might be worth investigating if this approach is applicable here as well.
hi @ellispritchard @evenh
Whilst that approach may work for Virtual Machines provisioned via an Availability Set, that's been superseded by VirtualMachineScaleSets, which pull their data from another source. As such, to be able to implement this we'd need the AKS API itself to support rotating the SSH keys in the same way it does for rotating the active_directory and service_principal blocks (at the time of writing we only support rotating the Service Principal in Terraform today).
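For comparison, the Service Principal rotation referred to above is exposed through the AKS API and (assuming a recent enough Azure CLI) can be triggered as sketched below; the credential values are placeholders, and note this rotates the service principal, not the SSH key:

```bash
# Rotate the cluster's service principal credentials in place;
# no equivalent operation currently exists for linux_profile SSH keys.
az aks update-credentials \
  --resource-group my-resource-group \
  --name my-aks-cluster \
  --reset-service-principal \
  --service-principal "$NEW_CLIENT_ID" \
  --client-secret "$NEW_CLIENT_SECRET"
```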
As this functionality isn't available in the Azure API, I'm going to suggest opening an issue on the AKS repository, where the AKS team should be able to take a look into this. Once it's available there, we should be able to circle around and add support for it - however, since this requires new functionality to be exposed in the AKS API, I'm going to close this issue for the moment.
Thanks!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!