Terraform v0.11.7
+ provider.aws v1.34.0
+ provider.template v1.0.0
resource "aws_kinesis_firehose_delivery_stream" "logs" {
name = "${aws_s3_bucket.logs.bucket}-firehose"
destination = "extended_s3"
kinesis_source_configuration {
kinesis_stream_arn = "${aws_kinesis_stream.cloudwatch-logs.arn}"
role_arn = "${aws_iam_role.firehose_role.arn}"
}
extended_s3_configuration {
role_arn = "${aws_iam_role.firehose_role.arn}"
bucket_arn = "${aws_s3_bucket.logs.arn}"
buffer_size = 30
buffer_interval = 900 # we prefer high interval (fewer files); 900 seconds is the max allowed
compression_format = "GZIP"
processing_configuration = [
{
enabled = "true"
processors = [
{
type = "Lambda"
parameters = [
{
parameter_name = "LambdaArn"
parameter_value = "${aws_lambda_function.kinesis-cloudwatch-newline.arn}"
}
]
}
]
}
]
cloudwatch_logging_options {
enabled = true
log_group_name = "${aws_cloudwatch_log_group.logs-firehose-logs.name}"
log_stream_name = "S3Delivery"
}
}
}
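(For context: the configuration above references an IAM role that was not included in the report. Below is a minimal, hypothetical sketch of what that firehose_role might look like, assuming only the standard trust policy that lets Firehose assume the role; the permissions policies for S3, Kinesis, Lambda, and CloudWatch Logs are omitted.)

# Hypothetical sketch, not part of the original report: the firehose_role
# referenced above, with a standard Firehose trust policy. Attach the
# necessary permissions policies separately.
resource "aws_iam_role" "firehose_role" {
  name = "firehose-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "firehose.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}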
Too long to anonymize. Will try to provide if absolutely required.
Terraform should converge the stack such that running terraform plan immediately after terraform apply reports no changes.
Once the Firehose delivery stream is "dirty" there is no way to mark it as clean again in Terraform, short of actually deleting and re-creating the Firehose delivery stream. This is what it looks like when dirty:
~ aws_kinesis_firehose_delivery_stream.logs
      extended_s3_configuration.0.data_format_conversion_configuration.#:         "1" => "0"
      extended_s3_configuration.0.data_format_conversion_configuration.0.enabled: "false" => "true"
Steps to reproduce:

1. Create an aws_kinesis_firehose_delivery_stream in Terraform
2. terraform apply
3. terraform apply

terraform plan will be dirty, despite the fact that apply was just run.

Hi @nhnicwaller 👋 Sorry for the odd behavior! I imagine in this case it is the AWS console that is applying a "default" data processing configuration (set to false to match reality) to the actual resource. We'll likely need to teach the Terraform resource to ignore this difference automatically.
You have two options to work around this in the meantime.
Add a disabled data format conversion configuration in Terraform:
resource "aws_kinesis_firehose_delivery_stream" "logs" {
# ... other configuration ...
extended_s3_configuration {
# ... other configuration ...
data_format_conversion_configuration {
enabled = false
}
}
}
Although I believe in that case the Terraform resource will also require input_format_configuration and output_format_configuration to be added to the configuration, which is less than ideal.
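For illustration, here is a sketch of what that fuller block might look like; the open_x_json_ser_de/parquet_ser_de choices and the Glue database and table names are placeholders, not part of the original report:

resource "aws_kinesis_firehose_delivery_stream" "logs" {
  # ... other configuration ...

  extended_s3_configuration {
    # ... other configuration ...

    data_format_conversion_configuration {
      enabled = false

      # Required by the resource schema even when conversion is disabled;
      # the (de)serializer choices here are illustrative placeholders.
      input_format_configuration {
        deserializer {
          open_x_json_ser_de {}
        }
      }

      output_format_configuration {
        serializer {
          parquet_ser_de {}
        }
      }

      # schema_configuration is also required; these Glue names are hypothetical.
      schema_configuration {
        database_name = "example_database"
        table_name    = "example_table"
        role_arn      = "${aws_iam_role.firehose_role.arn}"
      }
    }
  }
}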
Instead, you can tell Terraform to always ignore the problematic attributes using ignore_changes:
resource "aws_kinesis_firehose_delivery_stream" "logs" {
# ... other configuration ...
lifecycle {
ignore_changes = [
"extended_s3_configuration.0.data_format_conversion_configuration",
"extended_s3_configuration.0.data_format_conversion_configuration.0.enabled",
]
}
}
Hope this helps in the meantime.
Amazing response, thank you very much! Both techniques to silence this warning are very helpful to know until such time as the root cause is fixed.
I've just been bitten by this same problem. We're typically using 2.6.0; I tried 2.14.0 in case it had been fixed. The lifecycle workaround works, though addressing this in a future release would be awesome.
With Terraform 0.12.6 neither of the workarounds works. The lifecycle trick still results in a broken state after refresh (Error: insufficient items for attribute "input_format_configuration"; must have at least 1), and having a block with enabled set to false still requires creating a full conversion configuration.
The fix for the original issue has been merged and will release with version 2.25.0 of the Terraform AWS Provider, later this week.
This has been released in version 2.25.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
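For anyone upgrading, a minimal sketch of pinning the provider in the pre-0.13 provider-block syntax (the region value is a placeholder):

provider "aws" {
  # "~> 2.25" allows 2.25.x and later 2.x releases, but not 3.0.
  version = "~> 2.25"
  region  = "us-west-2" # placeholder region
}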
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!