version = "=1.36.1"
azurerm_kubernetes_cluster
N/A, any configuration can be an example (a minimal sketch follows below).
N/A
N/A
All sensitive data should be hidden during the plan step.
This works correctly for "kube_config_raw" (listed as (sensitive value)).
Sensitive data inside the kube_config block is printed in plain text.
terraform plan
N/A
N/A
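For illustration, a minimal configuration of the affected resource might look like the sketch below (assuming provider 1.36.x and Terraform 0.12 syntax; every name, location and credential here is a placeholder, not taken from the report):

```hcl
# Sketch only: resource names, location and credentials are placeholders.
resource "azurerm_resource_group" "example" {
  name     = "example-aks-rg"
  location = "West Europe"
}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "exampleaks"

  # node pool block as used by the 1.x provider
  agent_pool_profile {
    name    = "default"
    count   = 1
    vm_size = "Standard_D2_v2"
    os_type = "Linux"
  }

  service_principal {
    client_id     = "00000000-0000-0000-0000-000000000000" # placeholder
    client_secret = "replace-me"                           # placeholder
  }
}
```

Per the comments below, the credentials show up in the plan diff when the cluster has a pending update, not on initial creation.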
@hajdukd This issue comes from a bug in the Terraform Plugin SDK. For this reason it is currently not possible to mask parts of nested blocks. One option would be to mark the whole kube_config block as sensitive. But to me this sounds like a workaround for the actual problem.
Additionally, attributes like client_certificate, cluster_ca_certificate and agent_pool_profile.linux_profile.ssh_key.key_data do not need to be sensitive imo. These are public certificates that shouldn't cause any damage when exposed.
A quick workaround is to filter the kube_config sensitive values out of the plan output with egrep:
terraform plan | egrep -v 'client_key|password|client_certificate|cluster_ca_certificate'
@brennerm In the short term could we mark the whole kube_admin_config as sensitive, until terraform gains the ability to mask parts of nested blocks?
The parts that are needed can still be referenced normally, passed around as outputs, and fetched via terraform_remote_state (see the sketch after this comment), so it's no inconvenience there; but having admin credentials show up in build logs is genuinely catastrophic. Especially because it shows up when the plan has an AKS update, not when it is created or unchanged. It's very easy to start using this resource as it stands without knowing you're leaking admin credentials until later.
Fortunately we noticed this only on clusters which don't have production workloads; otherwise it would have been a major incident. As it is, we still need to rotate all of the credentials.
In a nutshell, I just can't emphasize enough how strongly the possibility of leaking cluster-admin should outweigh the inconvenience of redacting a few other public properties until the related terraform bug is fixed.
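A rough sketch of the "passed around as outputs, and fetched via terraform_remote_state" pattern mentioned above (the output name, backend type and storage settings here are assumptions, not from the issue; Terraform 0.12 syntax):

```hcl
# Producing configuration: expose only what downstream stacks need.
output "kube_config" {
  value     = azurerm_kubernetes_cluster.example.kube_config_raw
  sensitive = true # redacted in the CLI output summary
}

# Consuming configuration: read the value back from remote state.
data "terraform_remote_state" "aks" {
  backend = "azurerm"
  config = {
    resource_group_name  = "example-state-rg" # placeholder
    storage_account_name = "examplestate"     # placeholder
    container_name       = "tfstate"
    key                  = "aks.tfstate"
  }
}

# Downstream usage, e.g. writing a kubeconfig file (illustrative only):
resource "local_file" "kubeconfig" {
  filename = "${path.module}/kubeconfig"
  content  = data.terraform_remote_state.aks.outputs.kube_config
}
```

Note this only affects how the value is passed around downstream; it does not stop the provider from printing kube_admin_config in the plan diff, which is the bug discussed here.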
This also affects app_service and the site_credential.password property.
This issue is causing us to leak app service deployment credentials into our deployment logs (we use Octopus Deploy), which are not considered safe storage for secrets by our infosec team. I know Octopus Deploy had to publish a CVE when they had a similar bug last year.
As an important security issue please fix this ASAP.
Since this issue needs to be fixed in the Terraform Plugin SDK, rather than tracking it in multiple places I'm going to close this issue in favour of the upstream one. Once that's been fixed we'll update the version of the Plugin SDK being used and this should get resolved - as such please subscribe to the upstream issue for updates.
Thanks!
I'm going to lock this issue because it has been closed for _30 days_. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error, please reach out to my human friends at [email protected]. Thanks!