Terraform v0.11.7
provider.azurerm: version = "~> 1.19"
Affected resource: azurerm_monitor_diagnostic_setting
resource "azurerm_monitor_diagnostic_setting" "aks-control-plane-logs" {
name = "aks-control-plane-logs"
target_resource_id = "${azurerm_kubernetes_cluster.aks-cluster.id}"
log_analytics_workspace_id = "${azurerm_log_analytics_workspace.aks-logs.id}"
log {
category = "kube-apiserver"
retention_policy {
enabled = true
days = 7
}
}
}
After applying the configuration, a subsequent plan should be clean and report no changes.
$ terraform apply
[...]
$ terraform plan
[...]
Plan: 0 to add, 0 to change, 0 to destroy.
# Should be up to date
Instead, the state contains all available log categories (although disabled) as well as the metrics, so the plan wants to remove them:
$ terraform apply
[...]
$ terraform plan
[...]
Terraform will perform the following actions:

  ~ module.aks.azurerm_monitor_diagnostic_setting.aks-control-plane-logs
      log.#:                                        "5" => "1"
      log.2084327955.category:                      "guard" => ""
      log.2084327955.enabled:                       "false" => "false"
      log.2084327955.retention_policy.#:            "1" => "0"
      log.2084327955.retention_policy.0.days:       "0" => "0"
      log.2084327955.retention_policy.0.enabled:    "false" => "false"
      log.2874178000.category:                      "cluster-autoscaler" => ""
      log.2874178000.enabled:                       "false" => "false"
      log.2874178000.retention_policy.#:            "1" => "0"
      log.2874178000.retention_policy.0.days:       "0" => "0"
      log.2874178000.retention_policy.0.enabled:    "false" => "false"
      log.3584060954.category:                      "kube-scheduler" => ""
      log.3584060954.enabled:                       "false" => "false"
      log.3584060954.retention_policy.#:            "1" => "0"
      log.3584060954.retention_policy.0.days:       "0" => "0"
      log.3584060954.retention_policy.0.enabled:    "false" => "false"
      log.3783560494.category:                      "kube-controller-manager" => ""
      log.3783560494.enabled:                       "false" => "false"
      log.3783560494.retention_policy.#:            "1" => "0"
      log.3783560494.retention_policy.0.days:       "0" => "0"
      log.3783560494.retention_policy.0.enabled:    "false" => "false"
      log.556665303.category:                       "kube-apiserver" => "kube-apiserver"
      log.556665303.enabled:                        "true" => "true"
      log.556665303.retention_policy.#:             "1" => "1"
      log.556665303.retention_policy.0.days:        "7" => "7"
      log.556665303.retention_policy.0.enabled:     "true" => "true"
      metric.#:                                     "1" => "0"
      metric.4109484471.category:                   "AllMetrics" => ""
      metric.4109484471.enabled:                    "false" => "false"
      metric.4109484471.retention_policy.#:         "1" => "0"
      metric.4109484471.retention_policy.0.days:    "0" => "0"
      metric.4109484471.retention_policy.0.enabled: "false" => "false"
Plan: 0 to add, 1 to change, 0 to destroy.
# Should be up to date
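A common workaround (a sketch only, not part of the original report) is to explicitly declare every category the API returns, with the unwanted ones disabled, so that the configuration matches what ends up in state:

resource "azurerm_monitor_diagnostic_setting" "aks-control-plane-logs" {
  name                       = "aks-control-plane-logs"
  target_resource_id         = "${azurerm_kubernetes_cluster.aks-cluster.id}"
  log_analytics_workspace_id = "${azurerm_log_analytics_workspace.aks-logs.id}"

  log {
    category = "kube-apiserver"

    retention_policy {
      enabled = true
      days    = 7
    }
  }

  # Mirror the categories Azure adds automatically, explicitly disabled,
  # so the next plan has nothing to remove. Repeat this block for
  # "guard", "cluster-autoscaler" and "kube-controller-manager" as well.
  log {
    category = "kube-scheduler"
    enabled  = false

    retention_policy {
      enabled = false
      days    = 0
    }
  }

  metric {
    category = "AllMetrics"
    enabled  = false

    retention_policy {
      enabled = false
      days    = 0
    }
  }
}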
Running terraform apply and then terraform plan, this is still an issue in Terraform 0.11.13 and azurerm 1.24.0.
This is still an issue in Terraform 0.11.13 and azurerm 1.27.1.
I still get this error with Terraform 0.12.6 and azurerm provider 1.32.1:
log {
  category = "ApplicationGatewayAccessLog"
  enabled  = true

  retention_policy {
    days    = 0
    enabled = false
  }
}

log {
  category = "ApplicationGatewayFirewallLog"
  enabled  = true

  retention_policy {
    days    = 0
    enabled = false
  }
}

log {
  category = "ApplicationGatewayPerformanceLog"
  enabled  = true

  retention_policy {
    days    = 0
    enabled = false
  }
}
      - metric {
          - category = "AllMetrics" -> null
          - enabled  = false -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
"metric" is not enabled in the Azure diagnostic settings for this service, but terraform wants to remove it on each apply run
Is this being looked into?
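The same trick should apply here (again only a sketch, not confirmed in this thread): declaring the disabled metric block so it matches what the API reports back should stop the plan from trying to remove it.

# Hypothetical addition inside the existing azurerm_monitor_diagnostic_setting
# resource: mirror the disabled AllMetrics block that Azure returns.
metric {
  category = "AllMetrics"
  enabled  = false

  retention_policy {
    days    = 0
    enabled = false
  }
}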
This is what I'm doing to send metrics to Log Analytics and logs to a storage account. Each plan shows one resource overwriting the other, but it appears both are in fact applied and kept:
locals {
  resource_id = azurerm_cosmosdb_account.default.id
}

# https://www.terraform.io/docs/providers/azurerm/d/monitor_diagnostic_categories.html
data "azurerm_monitor_diagnostic_categories" "default" {
  resource_id = local.resource_id
}

# https://www.terraform.io/docs/providers/azurerm/r/monitor_diagnostic_setting.html
resource "azurerm_monitor_diagnostic_setting" "default_metrics" {
  name                       = "metrics"
  target_resource_id         = local.resource_id
  log_analytics_workspace_id = var.log_analytics_workspace_id

  # log_analytics_destination_type = "Dedicated"

  dynamic "metric" {
    for_each = sort(data.azurerm_monitor_diagnostic_categories.default.metrics)

    content {
      category = metric.value
      enabled  = true

      retention_policy {
        enabled = true
        days    = 180
      }
    }
  }
}

# https://www.terraform.io/docs/providers/azurerm/r/monitor_diagnostic_setting.html
resource "azurerm_monitor_diagnostic_setting" "default_logs" {
  name               = "logs"
  target_resource_id = local.resource_id
  storage_account_id = var.storage_account_id

  dynamic "log" {
    for_each = sort(data.azurerm_monitor_diagnostic_categories.default.logs)

    content {
      category = log.value
      enabled  = true

      retention_policy {
        enabled = true
        days    = 180
      }
    }
  }
}
Output:
...
      - log {
          - category = "PartitionKeyStatistics" -> null
          - enabled  = false -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      - log {
          - category = "QueryRuntimeStatistics" -> null
          - enabled  = false -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }

        metric {
            category = "Requests"
            enabled  = true

            retention_policy {
                days    = 180
                enabled = true
            }
        }
...
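As an aside, a quick way to check which categories the azurerm_monitor_diagnostic_categories data source actually discovered for the target resource (hypothetical debugging outputs, not part of the original config):

# Hypothetical outputs exposing the category lists that feed the
# dynamic blocks above.
output "discovered_log_categories" {
  value = sort(data.azurerm_monitor_diagnostic_categories.default.logs)
}

output "discovered_metric_categories" {
  value = sort(data.azurerm_monitor_diagnostic_categories.default.metrics)
}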
It would also be nice to have only one resource block and be able to direct each type of diagnostic setting to a different storage layer (e.g. logs to blob storage and metrics to Log Analytics). We're doing this to reduce the cost of storing verbose logs in Log Analytics while keeping them accessible in colder storage.
If you look at the full output of the plan, you'll notice the id is scoped to metrics or logs, and each plan shows the other type being removed. Both resources have the same target_resource_id, but the id is different, appended with |metrics or |logs.
However, it appears each setting is created just fine and neither is actually removed from the real resources. It just makes for some very verbose plans that show changes every time.
You can see below that the first id (ending in |logs) shows the metrics being removed, and the second (ending in |metrics) shows the logs being removed.
  # module.namespace.azurerm_monitor_diagnostic_setting.default_logs will be updated in-place
  ~ resource "azurerm_monitor_diagnostic_setting" "default_logs" {
        id                 = "/subscriptions/my-subscription_id/resourceGroups/my-rg/providers/Microsoft.EventHub/namespaces/my-eh|logs"
        name               = "logs"
        storage_account_id = "/subscriptions/my-subscription_id/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorageacc"
        target_resource_id = "/subscriptions/my-subscription_id/resourceGroups/my-rg/providers/Microsoft.EventHub/namespaces/my-eh"

        log {
            category = "ArchiveLogs"
            enabled  = true

            retention_policy {
                days    = 180
                enabled = true
            }
        }
        log {
            category = "AutoScaleLogs"
            enabled  = true

            retention_policy {
                days    = 180
                enabled = true
            }
        }
        log {
            category = "CustomerManagedKeyUserLogs"
            enabled  = true

            retention_policy {
                days    = 180
                enabled = true
            }
        }
        log {
            category = "EventHubVNetConnectionEvent"
            enabled  = true

            retention_policy {
                days    = 180
                enabled = true
            }
        }
        log {
            category = "KafkaCoordinatorLogs"
            enabled  = true

            retention_policy {
                days    = 180
                enabled = true
            }
        }
        log {
            category = "KafkaUserErrorLogs"
            enabled  = true

            retention_policy {
                days    = 180
                enabled = true
            }
        }
        log {
            category = "OperationalLogs"
            enabled  = true

            retention_policy {
                days    = 180
                enabled = true
            }
        }
      - metric {
          - category = "AllMetrics" -> null
          - enabled  = false -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
    }
  # module.namespace.azurerm_monitor_diagnostic_setting.default_metrics will be updated in-place
  ~ resource "azurerm_monitor_diagnostic_setting" "default_metrics" {
        id                         = "/subscriptions/my-subscription_id/resourceGroups/my-rg/providers/Microsoft.EventHub/namespaces/my-eh|metrics"
        log_analytics_workspace_id = "/subscriptions/my-subscription_id/resourcegroups/my-rg/providers/microsoft.operationalinsights/workspaces/my-log-analytics"
        name                       = "metrics"
        target_resource_id         = "/subscriptions/my-subscription_id/resourceGroups/my-rg/providers/Microsoft.EventHub/namespaces/my-eh"

      - log {
          - category = "ArchiveLogs" -> null
          - enabled  = false -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      - log {
          - category = "AutoScaleLogs" -> null
          - enabled  = false -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      - log {
          - category = "CustomerManagedKeyUserLogs" -> null
          - enabled  = false -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      - log {
          - category = "EventHubVNetConnectionEvent" -> null
          - enabled  = false -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      - log {
          - category = "KafkaCoordinatorLogs" -> null
          - enabled  = false -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      - log {
          - category = "KafkaUserErrorLogs" -> null
          - enabled  = false -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }
      - log {
          - category = "OperationalLogs" -> null
          - enabled  = false -> null

          - retention_policy {
              - days    = 0 -> null
              - enabled = false -> null
            }
        }

        metric {
            category = "AllMetrics"
            enabled  = true

            retention_policy {
                days    = 180
                enabled = true
            }
        }
    }
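For context, the provider composes the resource id as <target_resource_id>|<name>, which is why the |logs and |metrics suffixes appear above; it is also the id format the provider documents for importing an existing diagnostic setting, e.g.:

# Importing uses the same composite id (subscription path as in the plan above):
$ terraform import module.namespace.azurerm_monitor_diagnostic_setting.default_logs "/subscriptions/my-subscription_id/resourceGroups/my-rg/providers/Microsoft.EventHub/namespaces/my-eh|logs"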
Hey guys! Thank you for reporting this :+1:
This seems to be a duplicate of #7235, where I have put some explanation of this issue and linked a PR trying to suppress this annoying diff.
To better track this issue, I hope you don't mind that I close this one in favor of #7235. You can also subscribe to that issue for any updates!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!