Terraform-provider-aws: S3 bucket Error: insufficient items for attribute "destination"; must have at least 1

Created on 19 Jun 2019 · 10 comments · Source: hashicorp/terraform-provider-aws

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

I have two buckets, each with a replica. I imported these buckets into state. Now, when I run terraform plan to update the buckets, I get the error in the title.
The error message doesn't make sense, and the line number it reports changes between runs.
I don't know what is wrong, but when I remove the bucket the error points at, the plan is generated successfully. Yet the configuration for the other bucket is just a copy-paste of the first one.
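For reference, the buckets were imported roughly like this (bucket names are taken from the configuration below; the exact commands are an assumption):

```shell
# Import the primary buckets (eu-west-1) into state.
terraform import aws_s3_bucket.ps-db-backups ps-db-backups-b3bd1643-8cbf-4927-a64a-f0cf9b58dfab
terraform import aws_s3_bucket.ps-server-backups ps-server-backups

# Import the replica buckets (us-east-1, via the aliased provider).
terraform import aws_s3_bucket.ps-db-backups-replica ps-db-backups-replica-ec8d82b8-8e47-44ed-90f4-73dfc999fac4
terraform import aws_s3_bucket.ps-server-backups-replica ps-server-backups-replica
```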

Terraform Version

Terraform v0.12.2

  • provider.aws v2.15.0

Affected Resource(s)

  • aws_s3_bucket

Terraform Configuration Files

provider "aws" {
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "prod"
  region                  = "eu-west-1"
}

provider "aws" {
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "prod"
  alias                   = "us"
  region                  = "us-east-1"
}

terraform {
  backend "s3" {
    bucket = "ps-terraform-state-ca770e80-f59b-4281-a74c-00c98ab14017"
    key    = "prod/backups.tf"
    region = "eu-central-1"
  }
}



resource "aws_iam_role" "ps-db-backups-replication" {
  name = "ps-db-backups-replication"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
POLICY
}

resource "aws_iam_policy" "ps-db-backups-replication" {
  name = "ps-db-backups-replication"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetReplicationConfiguration",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "${aws_s3_bucket.ps-db-backups.arn}"
      ]
    },
    {
      "Action": [
        "s3:GetObjectVersion",
        "s3:GetObjectVersionAcl"
      ],
      "Effect": "Allow",
      "Resource": [
        "${aws_s3_bucket.ps-db-backups.arn}/*"
      ]
    },
    {
      "Action": [
        "s3:ReplicateObject",
        "s3:ReplicateDelete"
      ],
      "Effect": "Allow",
      "Resource": "${aws_s3_bucket.ps-db-backups-replica.arn}/*"
    }
  ]
}
POLICY
}

resource "aws_iam_policy_attachment" "ps-db-backups-replication" {
  name       = "ps-db-backups-replication"
  roles      = ["${aws_iam_role.ps-db-backups-replication.name}"]
  policy_arn = "${aws_iam_policy.ps-db-backups-replication.arn}"
}

resource "aws_s3_bucket" "ps-db-backups-replica" {
  bucket = "ps-db-backups-replica-ec8d82b8-8e47-44ed-90f4-73dfc999fac4"
  acl    = "private"
  region = "us-east-1"
  provider = "aws.us"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = "AES256"
      }
    }
  }
}

resource "aws_s3_bucket" "ps-db-backups" {
  bucket = "ps-db-backups-b3bd1643-8cbf-4927-a64a-f0cf9b58dfab"
  acl    = "private"
  region = "eu-west-1"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    id      = "transition"
    enabled = true

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    expiration {
      days = 180
    }
  }

  replication_configuration {
    role = "${aws_iam_role.ps-db-backups-replication.arn}"

    rules {
      id     = "ps-db-backups-replication"
      status = "Enabled"

      destination {
        bucket        = "${aws_s3_bucket.ps-db-backups-replica.arn}"
        storage_class = "GLACIER"
      }
    }
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = "AES256"
      }
    }
  }

}


resource "aws_iam_role" "ps-server-backups-replication" {
  name = "ps-server-backups-replication"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
POLICY
}

resource "aws_iam_policy" "ps-server-backups-replication" {
  name = "ps-server-backups-replication"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetReplicationConfiguration",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "${aws_s3_bucket.ps-server-backups.arn}"
      ]
    },
    {
      "Action": [
        "s3:GetObjectVersion",
        "s3:GetObjectVersionAcl"
      ],
      "Effect": "Allow",
      "Resource": [
        "${aws_s3_bucket.ps-server-backups.arn}/*"
      ]
    },
    {
      "Action": [
        "s3:ReplicateObject",
        "s3:ReplicateDelete"
      ],
      "Effect": "Allow",
      "Resource": "${aws_s3_bucket.ps-server-backups-replica.arn}/*"
    }
  ]
}
POLICY
}

resource "aws_iam_policy_attachment" "ps-server-backups-replication" {
  name       = "ps-server-backups"
  roles      = ["${aws_iam_role.ps-server-backups-replication.name}"]
  policy_arn = "${aws_iam_policy.ps-server-backups-replication.arn}"
}

resource "aws_s3_bucket" "ps-server-backups-replica" {
  bucket = "ps-server-backups-replica"
  acl    = "private"
  region = "us-east-1"
  provider = "aws.us"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket" "ps-server-backups" {
  bucket = "ps-server-backups"
  acl    = "private"
  region = "eu-west-1"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    id      = "transition"
    enabled = true

    transition {
      days          = 30
      storage_class = "STANDARD_IA" # or "ONEZONE_IA"
    }

    expiration {
      days = 180
    }
  }

  replication_configuration {
    role = "${aws_iam_role.ps-server-backups-replication.arn}"

    rules {
      id     = "ps-server-backups-replication"
      status = "Enabled"

      destination {
        bucket        = "${aws_s3_bucket.ps-server-backups-replica.arn}"
        storage_class = "STANDARD"
      }
    }
  }


}



Debug Output

https://gist.github.com/jira-zz/1d9fecf3de5c877bbb41a7f37e7a8a6d

Expected Behavior

terraform plan should generate a plan without errors

Actual Behavior

Error: insufficient items for attribute "destination"; must have at least 1

on main.tf line 142, in resource "aws_s3_bucket" "ps-db-backups":
142: server_side_encryption_configuration {

Steps to Reproduce

  1. terraform plan
Labels: bug, service/iam, service/s3, upstream-terraform

Most helpful comment

I'm also seeing this on a google_compute_region_instance_group_manager resource.

resource "google_compute_region_instance_group_manager" "this" {
  provider = "google-beta"

  name = "${var.id}-${var.name}-instance-group"

  base_instance_name         = "${var.id}-${var.name}"
  region                     = var.region
  distribution_policy_zones  = data.google_compute_zones.this.names

  target_size  = var.instance_count

  wait_for_instances = true

  version {
    instance_template = google_compute_instance_template.this.self_link
    name              = "latest"
  }

  named_port {
    name = "ui"
    port = 6688
  }
}

Results in:

google_compute_instance_template.this: Refreshing state... [id=<redacted>]

Error: insufficient items for attribute "version"; must have at least 1

Terraform 0.12.3
GCP Beta ~> 2.10

Considering the same error across 3 different resources, is this a generic issue with 0.12+?

All 10 comments

Not sure whether this is a generic error in the AWS provider or specific to the resource, but the same happens with the aws_cloudwatch_metric_alarm resource:

module.ecs.aws_cloudwatch_metric_alarm.low-cpu-credits-spot: Refreshing state... [id=ecs-autoscaling-group-spot-cpu-credits-below-30]
Error: insufficient items for attribute "input_format_configuration"; must have at least 1

This only happens after upgrading to Terraform 0.12.


Hi, I noticed that deleting the .tfstate file allows terraform plan to work. Any ideas why?

I'm seeing the same with 0.12.4, but only when I forget to specify the provider when importing beta resources:

terraform import -provider=google-beta

I was experiencing a similar issue with the resource aws_db_security_group. At every terraform apply I would get the following vague error.

Error: insufficient items for attribute "ingress"; must have at least 1

I worked around it by manually deleting those aws_db_security_group resources from the tfstate and then deleting the RDS security groups from the AWS Console web ui. At this point my next terraform apply recreated those resources and didn't throw any error.
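A less destructive variant of the same workaround is to remove only the affected resources from state rather than hand-editing the tfstate (the resource address below is hypothetical; substitute your own):

```shell
# Remove the affected resource from state without touching the real infrastructure.
terraform state rm aws_db_security_group.example

# Then either re-import it...
terraform import aws_db_security_group.example example-sg
# ...or let the next apply re-create it.
terraform apply
```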

Any update on this and the related "insufficient items for attribute xyz" issues? This is making it impossible to upgrade to TF 0.12.

I have recently encountered this as well. It works in some workspaces but not others, even though their s3 resources are functionally identical. The buckets are created in a module and I have several of them, so I cannot determine whether a specific bucket configuration triggers this.

Reverting to previously known-good states does not resolve it. This has unfortunately blocked all further Terraform changes and applies.

There are some upstream Terraform issues currently being fixed to cover this (e.g. https://github.com/hashicorp/terraform/pull/22478). When there is an appropriate Terraform CLI or Terraform AWS Provider release that covers this issue, more information will be added here.

We're having the same issue. Is there a workaround that we can use until this is fixed?

+1
