Terraform-provider-aws: aws_cloudwatch_log_group Error when an instance exists

Created on 31 Jan 2019 · 8 comments · Source: hashicorp/terraform-provider-aws

_This issue was originally opened by @k7faq as hashicorp/terraform#20175. It was migrated here as a result of the provider split. The original body of the issue is below._


Terraform Version

Terraform v0.11.11
+ provider.aws v1.57.0
+ provider.null v2.0.0

Terraform Configuration Files

...

Debug Output

Error: Error applying plan:

2 error(s) occurred:

* module.vpc_flow_logs.aws_iam_role.flow_logs: 1 error(s) occurred:

2019-01-31T12:13:43.200-0700 [DEBUG] plugin.terraform-provider-aws_v1.57.0_x4: 2019/01/31 12:13:43 [ERR] plugin: plugin server: accept unix /var/folders/tv/2w5pd_dn4wl354bd9hzxcrgr0000gn/T/plugin052160372: use of closed network connection
2019-01-31T12:13:43.200-0700 [DEBUG] plugin.terraform-provider-null_v2.0.0_x4: 2019/01/31 12:13:43 [ERR] plugin: plugin server: accept unix /var/folders/tv/2w5pd_dn4wl354bd9hzxcrgr0000gn/T/plugin661231433: use of closed network connection
* aws_iam_role.flow_logs: Error creating IAM Role AmazonVPCFlowLogs: EntityAlreadyExists: Role with name AmazonVPCFlowLogs already exists.
    status code: 409, request id: 531ba364-258c-11e9-9ec7-7778711760e6
* module.vpc_flow_logs.aws_cloudwatch_log_group.vpc: 1 error(s) occurred:

* aws_cloudwatch_log_group.vpc: Creating CloudWatch Log Group failed: ResourceAlreadyExistsException: The specified log group already exists
2019-01-31T12:13:43.201-0700 [DEBUG] plugin: plugin process exited: path=/Users/stevenrhodes/Documents/Projects/working/terraform-module-templates/.terraform/plugins/darwin_amd64/terraform-provider-null_v2.0.0_x4
    status code: 400, request id: 531c1896-####-11e9-802f-ed9852452c54:  The CloudWatch Log Group 've-redacted-w-logs' already exists.

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.


2019-01-31T12:13:43.203-0700 [DEBUG] plugin: plugin process exited: path=/Users/stevenrhodes/Documents/Projects/working/terraform-module-templates/.terraform/plugins/darwin_amd64/terraform-provider-aws_v1.57.0_x4

Crash Output

Expected Behavior


One would expect Terraform to acknowledge the existence of the CloudWatch Log Group and return the necessary identifiers to move on, rather than simply failing.

Actual Behavior


Received the error noted above and processing halted.

Steps to Reproduce

  • terraform init
  • terraform apply

Additional Context

References

service/cloudwatch


    All 8 comments

    I've run into this issue as well. Any news on a fix/remediation?

    Still occurring on aws plugin version 2.6.0

    I am running into this issue as well

    I saw this today with the aws provider version 2.23.0. I was able to get around it by importing the existing log group and then planning again:

    terraform import module.project.module.service-foo-lambda.aws_cloudwatch_log_group.this "/aws/lambda/service-foo-lambda-bar"
    

    I mitigated this behaviour by following the pattern in the documentation example here: https://www.terraform.io/docs/providers/aws/r/eks_cluster.html

    If you have existing infrastructure, use @dustinmoorman's suggestion to import the log group into the Terraform state.
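
    For instance, adapting that command to the original reporter's resource address (taken from the error output above; the log group name is a placeholder, since the real one is redacted), the import would look roughly like:

    terraform import module.vpc_flow_logs.aws_cloudwatch_log_group.vpc "<existing-log-group-name>"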

    If you're starting from a blank slate, ensure that Terraform creates the log group _before_ any other infrastructure that would start writing logs to that group, by using the depends_on field:

    resource "aws_eks_cluster" "an_eks_cluster" {
        depends_on = [aws_cloudwatch_log_group.a_log_group]
        ...
    }
    
    resource "aws_cloudwatch_log_group" "a_log_group" {
        ...
    }
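
    As an aside, the same ordering concern applies to the VPC flow logs setup from the original report. A rough sketch using the current aws_flow_log arguments might look like the block below (the VPC reference and names are illustrative assumptions, not taken from the thread); because the flow log references the log group's ARN, Terraform creates the group first without needing an explicit depends_on:

    resource "aws_cloudwatch_log_group" "vpc" {
      name = "vpc-flow-logs" # placeholder name
    }

    resource "aws_flow_log" "vpc" {
      vpc_id          = aws_vpc.main.id                  # assumes an existing aws_vpc.main
      traffic_type    = "ALL"
      iam_role_arn    = aws_iam_role.flow_logs.arn       # the role from the original error output
      log_destination = aws_cloudwatch_log_group.vpc.arn # implicit dependency on the log group
    }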
    

    Combining the above should mean that tearing everything down with terraform destroy removes the log group as well. When you bring everything back up, there will be no conflicting log group left over from a previous deploy, and no other infrastructure will silently introduce the group before yours has been explicitly created.

    Hi folks, since the issue @k7faq reported doesn't have an accompanying config file, and it looks like there are some good examples of how to handle the problem in this thread, I'm going to close it. If there's still something that doesn't work as expected, please file a new bug report and make sure to include all the requested details. Thanks!

    I ran into this issue. I agree with @alex that if it is a new resource then depends_on can handle this, but for existing resources terraform import is cumbersome when log group names are created dynamically. In my scenario I am creating RDS log groups (AWS creates log groups for RDS only if logs are exported) because I want to associate a log subscription filter with them. If for some reason any log group already exists, the script fails. I thought of checking the log group name using data sources, but that also complains if the log group does not exist. Is there any way I can check whether a log group already exists without failing if it doesn't?

    Here is my tf code:

    locals {
      rds_engine_loggrp = {
        "mysql"     = ["/aws/rds/instance/%s/error", "/aws/rds/instance/%s/audit"]
        "oracle"    = ["/aws/rds/instance/%s/alert", "/aws/rds/instance/%s/audit"]
        "sqlserver" = ["/aws/rds/instance/%s/error"]
        "mariadb"   = ["/aws/rds/instance/%s/error", "/aws/rds/instance/%s/audit"]
        "postgres"  = ["/aws/rds/instance/%s/postgresql"]
      }

      #### TODO: Put more restrictive filters for each engine. Combine filters for all the groups into one string per engine.
      rds_loggrp_filter = {
        "mysql"     = ""
        "oracle"    = ""
        "sqlserver" = ""
        "mariadb"   = ""
        "postgres"  = ""
      }
    }

    data "aws_db_instance" "db_instance" {
      for_each               = toset(var.db_instance_identifier)
      db_instance_identifier = each.value
    }

    locals {
      grp_list = flatten([                      # the inner loop yields a list of lists, so flatten it
        for id in var.db_instance_identifier :  # iterate over each db identifier; each yields the groups listed in rds_engine_loggrp
        [
          for g in lookup(local.rds_engine_loggrp, split("-", data.aws_db_instance.db_instance[id].engine)[0]) :
          format(g, id)                         # substitute the identifier into the log group name
        ]
        if var.splunk_flowlogs == "1"           # create resources only if the Splunk integration flag is enabled
      ])
    }

    resource "aws_cloudwatch_log_group" "rds_cw_grp" {
      for_each = toset(local.grp_list)
      name     = each.value

      tags = {
        "Name"              = "RDS-CW-Logs"
        "fico:common:owner" = "AWS Cloud Engineering"
      }
    }

    resource "aws_cloudwatch_log_subscription_filter" "rds-cwlogs-to-shared-services-kinesis" {
      for_each        = toset(local.grp_list)
      name            = format("%s-GTS-CloudWatchLogsFilter", var.account-id) # the account id distinguishes accounts for the same log source in Splunk
      log_group_name  = each.value
      filter_pattern  = lookup(local.rds_loggrp_filter, split("-", data.aws_db_instance.db_instance[split("/", each.value)[4]].engine)[0])
      destination_arn = "arn:aws:logs:${var.region}:${var.shared_account_id}:destination:cw_logs_to_firehose_destination_${var.ss_environment}" # assuming shared_account_id is a variable; the original had a bare reference
      distribution    = "Random"
      depends_on      = [aws_cloudwatch_log_group.rds_cw_grp] # ensure the group is created before the filter, otherwise the filter fails
    }
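
    To the question above about checking for existing log groups without failing: one possible sketch (not from this thread) uses the plural aws_cloudwatch_log_groups data source available in newer AWS provider versions, which returns an empty list rather than an error when nothing matches a prefix. The prefix below is an illustrative assumption:

    # List any log groups that already exist under the RDS prefix (an empty result is fine).
    data "aws_cloudwatch_log_groups" "existing_rds" {
      log_group_name_prefix = "/aws/rds/instance/" # placeholder prefix
    }

    locals {
      # Groups from grp_list that do not exist yet; this could drive the for_each
      # above instead of local.grp_list, so existing groups are skipped.
      missing_grp_list = [
        for g in local.grp_list : g
        if !contains(data.aws_cloudwatch_log_groups.existing_rds.log_group_names, g)
      ]
    }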

    I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

    If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
