Terraform-provider-aws: Error on plan refreshing firehose state: insufficient items for attribute output_format_configuration

Created on 13 Jun 2019  ·  11 Comments  ·  Source: hashicorp/terraform-provider-aws

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

Terraform v0.12.1
+ provider.aws v2.14.0

Affected Resource(s)

  • aws_kinesis_firehose_delivery_stream

Terraform Configuration Files

resource "aws_kinesis_firehose_delivery_stream" "brandslangen" {
  name        = "brandslangen"
  destination = "extended_s3"

  extended_s3_configuration {
    role_arn   = "${aws_iam_role.rollen.arn}"
    bucket_arn = "${aws_s3_bucket.hinken.arn}"
  }
}

resource "aws_s3_bucket" "hinken" {
  bucket = "hinken.supercorp.io"
}

resource "aws_iam_role" "rollen" {
  name               = "rollen"
  assume_role_policy = "${data.aws_iam_policy_document.rollen_trust.json}"
}

data "aws_iam_policy_document" "rollen_trust" {
  statement {
    principals {
      type        = "Service"
      identifiers = ["firehose.amazonaws.com"]
    }

    actions = [
      "sts:AssumeRole",
    ]
  }
}

Debug Output

Full log

https://gist.github.com/staffan-einarsson/c7394b5fc331ffc7ff08f2f8ae48d5ca

Error lines

2019/06/13 08:46:04 [DEBUG] ReferenceTransformer: "aws_kinesis_firehose_delivery_stream.brandslangen" references: []
2019/06/13 08:46:04 [TRACE] Completed graph transform *terraform.ReferenceTransformer (no changes)
2019/06/13 08:46:04 [TRACE] Executing graph transform *terraform.RootTransformer
2019/06/13 08:46:04 [TRACE] Completed graph transform *terraform.RootTransformer (no changes)
2019/06/13 08:46:04 [TRACE] vertex "aws_kinesis_firehose_delivery_stream.brandslangen": entering dynamic subgraph
2019/06/13 08:46:04 [TRACE] dag/walk: updating graph
2019/06/13 08:46:04 [ERROR] <root>: eval: *terraform.EvalReadState, err: insufficient items for attribute "output_format_configuration"; must have at least 1
2019/06/13 08:46:04 [ERROR] <root>: eval: *terraform.EvalSequence, err: insufficient items for attribute "output_format_configuration"; must have at least 1
2019/06/13 08:46:04 [TRACE] [walkRefresh] Exiting eval tree: aws_kinesis_firehose_delivery_stream.brandslangen
2019/06/13 08:46:04 [TRACE] vertex "aws_kinesis_firehose_delivery_stream.brandslangen": visit complete
2019/06/13 08:46:04 [TRACE] vertex "aws_kinesis_firehose_delivery_stream.brandslangen": dynamic subgraph encountered errors

Expected Behavior

No errors. The in-memory state should be refreshed and the plan shown.

Actual Behavior

The in-memory state refresh produces an error:

Error: insufficient items for attribute "output_format_configuration"; must have at least 1

Steps to Reproduce

Assume an empty initial Terraform state and the configuration file shown above. For the actual logs and state, see https://gist.github.com/staffan-einarsson/c7394b5fc331ffc7ff08f2f8ae48d5ca.

  1. terraform apply to create the firehose, bucket and role, and produce the terraform state shown in before_manual_touch.tfstate.
  2. (Optional) aws firehose describe-delivery-stream --delivery-stream-name brandslangen to print AWS description of the firehose, shown in describe_api_call_before_manual_touch.log.
  3. Open AWS Console and navigate to the firehose brandslangen.
  4. Click on the Edit button, then immediately on the Save button, without making any changes.
  5. (Optional) aws firehose describe-delivery-stream --delivery-stream-name brandslangen to print AWS description of the firehose, shown in describe_api_call_after_manual_touch.log.
  6. (Optional) terraform refresh to refresh the state file, shown in after_manual_touch.tfstate.
  7. terraform plan, which will fail to parse the Terraform state.

Additional details

Diffing describe_api_call_before_manual_touch.log and describe_api_call_after_manual_touch.log shows that during the manual modification AWS added some attributes to the firehose configuration, even though the configuration is semantically unchanged.

10c10
<         "VersionId": "1",
---
>         "VersionId": "2",
11a12
>         "LastUpdateTimestamp": 1560407082.658,
18a20
>                     "ErrorOutputPrefix": "",
34a37
>                     "ErrorOutputPrefix": "",
46c49,56
<                     "S3BackupMode": "Disabled"
---
>                     "ProcessingConfiguration": {
>                         "Enabled": false,
>                         "Processors": []
>                     },
>                     "S3BackupMode": "Disabled",
>                     "DataFormatConversionConfiguration": {
>                         "Enabled": false
>                     }

Diffing before_manual_touch.tfstate and after_manual_touch.tfstate shows that Terraform does not interpret these configurations as equivalent. Instead it records state that in some cases would lead to unintended diffs, and in this case even produces corrupt state (input_format_configuration, output_format_configuration, and schema_configuration are required attributes that cannot be empty).

4c4
<   "serial": 21,
---
>   "serial": 22,
69c69
<             "tags": null,
---
>             "tags": {},
104c104,111
<                 "data_format_conversion_configuration": [],
---
>                 "data_format_conversion_configuration": [
>                   {
>                     "enabled": false,
>                     "input_format_configuration": [],
>                     "output_format_configuration": [],
>                     "schema_configuration": []
>                   }
>                 ],
108c115,120
<                 "processing_configuration": [],
---
>                 "processing_configuration": [
>                   {
>                     "enabled": false,
>                     "processors": []
>                   }
>                 ],
120,121c132,133
<             "tags": null,
<             "version_id": "1"
---
>             "tags": {},
>             "version_id": "2"
158c170
<             "tags": null,
---
>             "tags": {},

Here we can also see the same thing happening with processors in processing_configuration, which appears to be what #4392 is about.

References

  • Very likely related to #6053 and #4392
bug service/firehose

All 11 comments

https://github.com/terraform-providers/terraform-provider-aws/blob/358230748d4209708e14c8c36873d98c16e38dfe/aws/resource_aws_kinesis_firehose_delivery_stream.go#L326-L339

My guess would be to handle this here. We could move the nested flatten calls up and add a condition: if dfcc.Enabled is false and all three nested configurations are nil, the result should also be []map[string]interface{}{}, equivalent to the case where dfcc itself is nil.
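The idea above can be sketched as follows. This is a rough illustration, not the provider's actual code: the type and function names mimic the real ones, but the structs here are simplified stand-ins for the AWS SDK types, and the nested-block flattening is elided.

```go
package main

import "fmt"

// Simplified stand-ins for the AWS SDK types (the real provider uses
// *firehose.DataFormatConversionConfiguration and its nested types).
type InputFormatConfiguration struct{}
type OutputFormatConfiguration struct{}
type SchemaConfiguration struct{}

type DataFormatConversionConfiguration struct {
	Enabled                   *bool
	InputFormatConfiguration  *InputFormatConfiguration
	OutputFormatConfiguration *OutputFormatConfiguration
	SchemaConfiguration       *SchemaConfiguration
}

// flattenDataFormatConversionConfiguration normalizes a disabled, empty
// configuration to the same value as a missing one, so a refresh does not
// record a single-item block whose required sub-blocks are empty.
func flattenDataFormatConversionConfiguration(dfcc *DataFormatConversionConfiguration) []map[string]interface{} {
	if dfcc == nil {
		return []map[string]interface{}{}
	}
	// Treat "disabled with no nested configuration" as absent, matching
	// what AWS returns after a no-op console edit.
	if dfcc.Enabled != nil && !*dfcc.Enabled &&
		dfcc.InputFormatConfiguration == nil &&
		dfcc.OutputFormatConfiguration == nil &&
		dfcc.SchemaConfiguration == nil {
		return []map[string]interface{}{}
	}
	enabled := false
	if dfcc.Enabled != nil {
		enabled = *dfcc.Enabled
	}
	return []map[string]interface{}{{
		"enabled": enabled,
		// the real code would flatten the nested blocks here
	}}
}

func main() {
	disabled := false
	dfcc := &DataFormatConversionConfiguration{Enabled: &disabled}
	fmt.Println(len(flattenDataFormatConversionConfiguration(dfcc))) // prints 0
	fmt.Println(len(flattenDataFormatConversionConfiguration(nil)))  // prints 0
}
```

With this normalization, the post-edit API response shown in the diff above would flatten back to an empty list, producing no state change.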

same problem here +1

This might be related to #9048; there seem to be a lot of "insufficient items for attribute xyz" errors in Terraform 0.12. It has happened with these resources:

  • aws_s3_bucket
  • aws_cloudwatch_metric_alarm
  • aws_kinesis_firehose_delivery_stream

This makes upgrading to TF 0.12 impossible for us currently.

It's been more than a month since the last update. Meanwhile, Terraform 0.12 _cannot_ be used to create firehose delivery streams!

This is not a corner case. Very basic examples from the documentation do not work because of this bug.

Can we have some update please? We're stuck after migrating to 0.12 and are now forced to manage our resources manually.

Hitting this with the GCP provider in google_compute_region_instance_group_manager, which is completely breaking automated rollouts. My specific instance is detailed here: https://github.com/terraform-providers/terraform-provider-google/issues/4169

Like @mikea mentioned, very basic examples (e.g. the example from the GCP provider docs) trigger this bug. Hoping somebody can provide an update here soon.

I was able to refresh successfully by pulling the state, removing the data_format_conversion_configuration block from the state file, and pushing the state back.

However, running plan still doesn't work.
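For anyone needing the same workaround, the state surgery can be scripted. This is a sketch, not an endorsed procedure: it assumes the attribute layout shown in the tfstate diffs above, and it edits state, so keep a backup. It reads a pulled state on stdin and writes the scrubbed state to stdout.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// scrubState empties every data_format_conversion_configuration attribute in
// a pulled Terraform state and bumps the serial, which `terraform state push`
// typically requires when the contents change.
func scrubState(state map[string]interface{}) {
	resources, _ := state["resources"].([]interface{})
	for _, r := range resources {
		res, _ := r.(map[string]interface{})
		instances, _ := res["instances"].([]interface{})
		for _, i := range instances {
			inst, _ := i.(map[string]interface{})
			attrs, _ := inst["attributes"].(map[string]interface{})
			if _, ok := attrs["data_format_conversion_configuration"]; ok {
				attrs["data_format_conversion_configuration"] = []interface{}{}
			}
		}
	}
	if serial, ok := state["serial"].(float64); ok {
		state["serial"] = serial + 1
	}
}

func main() {
	var state map[string]interface{}
	if err := json.NewDecoder(os.Stdin).Decode(&state); err != nil {
		fmt.Fprintln(os.Stderr, "reading state:", err)
		os.Exit(1)
	}
	scrubState(state)
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	enc.Encode(state)
}
```

Usage would be along the lines of `terraform state pull | go run scrub.go > fixed.tfstate`, then `terraform state push fixed.tfstate`. As noted above, this only gets refresh working; plan may still fail.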

For the original issue report, the aws_kinesis_firehose_delivery_stream resource needed some additional handling to better normalize a disabled data_format_conversion_configuration while also ignoring differences between a lack of processing_configuration configuration and a disabled processing configuration with no processors. Those fixes have been merged and will be released with version 2.25.0 of the Terraform AWS Provider later this week.

For the other cases of insufficient items for attribute with other Terraform resources, there are some upstream Terraform changes occurring to fix that particular issue (e.g. https://github.com/hashicorp/terraform/pull/22478). Please ensure there is a covering GitHub bug report for those resource problems and those GitHub issues will be resolved when the upstream changes are appropriately handled either in a Terraform CLI release or Terraform AWS Provider release. Thanks.

This has been released in version 2.25.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!

The bug persists for me even after upgrading to version 2.25.0 of the AWS provider. I'm still getting Error: insufficient items for attribute.

Adding those parameters explicitly does not make a difference either.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
