Terraform-provider-google: google_monitoring_alert_policy filter invalid value must specify a restriction on "resource.type"

Created on 5 Aug 2019  ·  9 comments  ·  Source: hashicorp/terraform-provider-google


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
  • If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.

Terraform Version

terraform -v
Terraform v0.11.14

  • provider.aws v2.21.1
  • provider.google v2.11.0
  • provider.google-beta v2.11.0
  • provider.helm v0.10.0
  • provider.kubernetes v1.8.1
  • provider.local v1.3.0
  • provider.template v2.1.2

Affected Resource(s)

google_monitoring_alert_policy

Terraform Configuration Files

resource "google_monitoring_alert_policy" "logging_bytes_alert_policy" {
  project = "${var.project}"
  display_name = "Logging Bytes"
  enabled = true
  combiner = "OR"
  notification_channels = [
    "${var.notificationChannels}"
  ]
  conditions {
    display_name = "Logging Bytes Condition"
    condition_threshold {
      threshold_value = "${var.threshold_value_bytes}"
      filter = "metric.type=\"logging.googleapis.com/byte_count\""
      duration = "600s"
      comparison = "COMPARISON_GT"
      trigger {
        count = 1
      }
      aggregations {
        alignment_period = "60s"
        per_series_aligner = "ALIGN_RATE"
        cross_series_reducer = "REDUCE_SUM"
        group_by_fields = [
          "metric.label.log"
        ]
      }
    }
  }
}

Debug Output

Expected Behavior

In GCP Monitoring I can create this alert without the resource.type. I exported the JSON from the console:

{
  "combiner": "OR",
  "conditions": [
    {
      "conditionThreshold": {
        "aggregations": [
          {
            "alignmentPeriod": "60s",
            "groupByFields": [
              "metric.label.log"
            ],
            "perSeriesAligner": "ALIGN_RATE"
          }
        ],
        "comparison": "COMPARISON_GT",
        "duration": "600s",
        "filter": "metric.type=\"logging.googleapis.com/log_entry_count\"",
        "thresholdValue": 400,
        "trigger": {
          "count": 1
        }
      },
      "displayName": "Logging Entries Condition"
    }
  ],
  "displayName": "Logging Entries",
  "enabled": true,
  "incidentStrategy": {},
  "notificationChannels": [
xxx
  ]
}

Actual Behavior

2019/08/05 14:45:56 [ERROR] root.logging_alerts: eval: *terraform.EvalApplyPost, err: 1 error occurred:
    * google_monitoring_alert_policy.logging_bytes_alert_policy: Error updating AlertPolicy "projects/nuorder-staging/alertPolicies/2612755766349xxxxxx": googleapi: Error 400: Field alert_policy.conditions[0].condition_threshold.filter had an invalid value of "metric.type="logging.googleapis.com/byte_count"": must specify a restriction on "resource.type" in the filter; see "https://cloud.google.com/monitoring/api/resources" for a list of available resource types.

Steps to Reproduce

  1. terraform apply

Important Factoids

References

  • #0000
bug

All 9 comments

Try adding

AND resource.type=\"metric\"

to your filter.

Something like this

filter = "metric.type=\"logging.googleapis.com/byte_count\" AND resource.type=\"metric\"",
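In context, the suggested condition would look something like this (a sketch of the original configuration with the extra restriction added; whether "metric" is the right resource type depends on the monitored resource the metric is actually written against, see https://cloud.google.com/monitoring/api/resources):

```hcl
condition_threshold {
  threshold_value = "${var.threshold_value_bytes}"
  # "metric" is an assumed resource type here; substitute the monitored
  # resource your data is actually written against (e.g. gke_container).
  filter     = "metric.type=\"logging.googleapis.com/byte_count\" AND resource.type=\"metric\""
  duration   = "600s"
  comparison = "COMPARISON_GT"
}
```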

It appears you're using different filter values. The first example uses filter = "metric.type=\"logging.googleapis.com/byte_count\"" while the second uses "filter": "metric.type=\"logging.googleapis.com/log_entry_count\"". Can you try an identical filter value in each?

Terraform configuration files

resource "google_monitoring_alert_policy" "alert_policy" {
  display_name = "My Alert Policy"
  combiner = "OR"
  conditions {
    display_name = "test condition"
    condition_threshold {
      filter = "metric.type=\"logging.googleapis.com/user/<user_created_log_based_metric>\""
      duration = "60s"
      comparison = "COMPARISON_GT"
      aggregations {
        alignment_period = "60s"
        per_series_aligner = "ALIGN_RATE"
      }
    }
  }

  user_labels = {
    foo = "bar"
  }
}

Facing the same problem when using a user-created log-based metric, with
logging.googleapis.com/user/<log-based-metric-name>

Error reported:
Error updating AlertPolicy "projects/<>/alertPolicies/<>": googleapi: Error 400: Field alert_policy.conditions[0].condition_threshold.filter had an invalid value of "metric.type="logging.googleapis.com/user/<>"": must specify a restriction on "resource.type" in the filter; see "https://cloud.google.com/monitoring/api/resources" for a list of available resource types.

The resource type is not a required filter for log-based metrics as far as GCP is concerned, although leaving it out can impact performance.

However, it can be tricky to get the resource type right the first time, as "The list of monitored resource types in Monitoring is not presently the same as the list of monitored resource types in Logging."

This is according to this document:
https://cloud.google.com/monitoring/api/resources

It would be SUPER helpful to be able to create google_monitoring_alert_policy without having to first specify the resource type.

It appears you're using different filter values. The first example uses filter = "metric.type=\"logging.googleapis.com/byte_count\"" while the second uses "filter": "metric.type=\"logging.googleapis.com/log_entry_count\"". Can you try an identical filter value in each?

@rileykarson I can confirm that log_entry_count works without a resource type but byte_count fails. Adding resource.type=\"gke_container\" to the filter fixes it. The resource type is NOT required in the Stackdriver Metrics Explorer in either case.

I'm fine closing this as I can track down the resource type - but it remains strange behavior.
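Putting that confirmation together, a working filter for the byte_count condition would be (assuming, as in the comment above, that the logs come from GKE containers; other workloads will need a different resource type):

```hcl
filter = "metric.type=\"logging.googleapis.com/byte_count\" AND resource.type=\"gke_container\""
```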

That's surprising that it works there but not here. We're hitting the API directly, so I wonder if they provide a default value like metric, as in https://github.com/terraform-providers/terraform-provider-google/issues/4165#issuecomment-519098176.

I'm going to close this out because I think we're sending the right request and hitting an API limitation. If you're able to find the exact filter value the console returns (such as through the API Explorer) and it's different from what you expect, getting closer to that is something we could look into having Terraform do.

I encountered the exact same issue. I tried comparing the updates of the resource in both Terraform and in the UI.

Terraform uses PATCH https://monitoring.googleapis.com/v3/projects/<PROJECT>/alertPolicies/<ID> endpoint to do the update.

The UI uses PATCH https://monitoring.clients6.google.com/v3/projects/<PROJECT>/alertPolicies/<ID>?key=<REDACTED> instead.

PATCH Body:

{
  "combiner": "OR",
  "conditions": [
    {
      "conditionThreshold": {
        "aggregations": [
          {
            "alignmentPeriod": "60s",
            "crossSeriesReducer": "REDUCE_NONE",
            "groupByFields": [],
            "perSeriesAligner": "ALIGN_RATE"
          }
        ],
        "comparison": "COMPARISON_GT",
        "duration": "60s",
        "filter": "metric.type=\"<REDACTED>\" project=\"<REDACTED>\"",
        "thresholdValue": 0.001,
        "trigger": {
          "count": 1
        }
      },
      "displayName": "<REDACTED>",
      "name": "projects/<PROJECT>/alertPolicies/<REDACTED>/conditions/<REDACTED>"
    }
  ],
  "disabled": false,
  "displayName": "<REDACTED>",
  "enabled": true,
  "incidentStrategy": {},
  "name": "projects/<PROJECT>/alertPolicies/<ID>",
  "notificationChannels": [],
  "userLabels": {}
}

The UI uses a different set of undocumented APIs that treat the input differently. I will raise a support case with GCP and see what they say.

After a frustrating back and forth with GCP support, they pointed me to this issue: https://issuetracker.google.com/issues/143436657

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
