### Terraform Version

```
terraform -v
Terraform v0.12.6
```

### Terraform Configuration Files

```hcl
variable "worker_instance_count" {
default = 3
}
variable "worker_ebs_storage_count" {
default = 3
}
resource "aws_instance" "data_cluster_worker" {
ami = "ami-0a313d6098716f372"
instance_type = "t2.micro"
count = var.worker_instance_count
}
resource "aws_ebs_volume" "data-ebs-volumes" {
count = "${var.worker_instance_count * var.worker_ebs_storage_count}"
availability_zone = var.availability_zone
size = var.worker_ebs_storage_size
type = "gp2"
}
resource "aws_volume_attachment" "data-ebs-volumes-attach" {
count = var.worker_instance_count * var.worker_ebs_storage_count
volume_id = aws_ebs_volume.data-ebs-volumes..id[count.index]
device_name = element(var.device_names, count.index)
instance_id = element(aws_instance.data_cluster_worker..id, count.index)
}
variable "device_names" {
default = ["/dev/xvdf",
"/dev/xvdg",
"/dev/xvdh",
"/dev/xvdi",
"/dev/xvdj",
"/dev/xvdk",
"/dev/xvdl",
"/dev/xvdm"]
}
```

### Debug Output

```
module.data-cluster.aws_instance.data_cluster_worker[1]: Refreshing state... [id=i-XXXXXXXXX]
module.data-cluster.aws_instance.data_cluster_worker[2]: Refreshing state... [id=i-XXXXXXXXX]
module.data-cluster.aws_instance.data_cluster_worker[0]: Refreshing state... [id=i-XXXXXXXXX]
module.data-cluster.aws_ebs_volume.data-ebs-volumes[2]: Refreshing state... [id=vol-XXXXXX]
module.data-cluster.aws_ebs_volume.data-ebs-volumes[8]: Refreshing state... [id=vol-XXXXXX]
module.data-cluster.aws_ebs_volume.data-ebs-volumes[3]: Refreshing state... [id=vol-XXXXXX]
module.data-cluster.aws_ebs_volume.data-ebs-volumes[1]: Refreshing state... [id=vol-XXXXXX]
module.data-cluster.aws_ebs_volume.data-ebs-volumes[6]: Refreshing state... [id=vol-XXXXXX]
module.data-cluster.aws_ebs_volume.data-ebs-volumes[0]: Refreshing state... [id=vol-XXXXXX]
module.data-cluster.aws_ebs_volume.data-ebs-volumes[5]: Refreshing state... [id=vol-XXXXXX]
module.data-cluster.aws_ebs_volume.data-ebs-volumes[7]: Refreshing state... [id=vol-XXXXXX]
module.data-cluster.aws_ebs_volume.data-ebs-volumes[4]: Refreshing state... [id=vol-XXXXXX]
Error: Invalid index
on ../modules/data-cluster/main.tf line 127, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
127: volume_id = aws_ebs_volume.data-ebs-volumes.*.id[count.index]
|----------------
| aws_ebs_volume.data-ebs-volumes is empty tuple
| count.index is 6
The given key does not identify an element in this collection value.
Error: Invalid index
on ../modules/data-cluster/main.tf line 127, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
127: volume_id = aws_ebs_volume.data-ebs-volumes.*.id[count.index]
|----------------
| aws_ebs_volume.data-ebs-volumes is empty tuple
| count.index is 1
The given key does not identify an element in this collection value.
Error: Invalid index
on ../modules/data-cluster/main.tf line 127, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
127: volume_id = aws_ebs_volume.data-ebs-volumes.*.id[count.index]
|----------------
| aws_ebs_volume.data-ebs-volumes is empty tuple
| count.index is 4
The given key does not identify an element in this collection value.
Error: Invalid index
on ../modules/data-cluster/main.tf line 127, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
127: volume_id = aws_ebs_volume.data-ebs-volumes.*.id[count.index]
|----------------
| aws_ebs_volume.data-ebs-volumes is empty tuple
| count.index is 5
The given key does not identify an element in this collection value.
Error: Invalid index
on ../modules/data-cluster/main.tf line 127, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
127: volume_id = aws_ebs_volume.data-ebs-volumes.*.id[count.index]
|----------------
| aws_ebs_volume.data-ebs-volumes is empty tuple
| count.index is 2
The given key does not identify an element in this collection value.
Error: Invalid index
on ../modules/data-cluster/main.tf line 127, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
127: volume_id = aws_ebs_volume.data-ebs-volumes.*.id[count.index]
|----------------
| aws_ebs_volume.data-ebs-volumes is empty tuple
| count.index is 7
The given key does not identify an element in this collection value.
Error: Invalid index
on ../modules/data-cluster/main.tf line 127, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
127: volume_id = aws_ebs_volume.data-ebs-volumes.*.id[count.index]
|----------------
| aws_ebs_volume.data-ebs-volumes is empty tuple
| count.index is 0
The given key does not identify an element in this collection value.
Error: Invalid index
on ../modules/data-cluster/main.tf line 127, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
127: volume_id = aws_ebs_volume.data-ebs-volumes.*.id[count.index]
|----------------
| aws_ebs_volume.data-ebs-volumes is empty tuple
| count.index is 8
The given key does not identify an element in this collection value.
Error: Invalid index
on ../modules/data-cluster/main.tf line 127, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
127: volume_id = aws_ebs_volume.data-ebs-volumes.*.id[count.index]
|----------------
| aws_ebs_volume.data-ebs-volumes is empty tuple
| count.index is 3
The given key does not identify an element in this collection value.
Error: Error in function call
on ../modules/data-cluster/main.tf line 129, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
129: instance_id = element(aws_instance.data_cluster_worker.*.id,count.index)
|----------------
| aws_instance.data_cluster_worker is empty tuple
| count.index is 6
Call to function "element" failed: cannot use element function with an empty
list.
Error: Error in function call
on ../modules/data-cluster/main.tf line 129, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
129: instance_id = element(aws_instance.data_cluster_worker.*.id,count.index)
|----------------
| aws_instance.data_cluster_worker is empty tuple
| count.index is 1
Call to function "element" failed: cannot use element function with an empty
list.
Error: Error in function call
on ../modules/data-cluster/main.tf line 129, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
129: instance_id = element(aws_instance.data_cluster_worker.*.id,count.index)
|----------------
| aws_instance.data_cluster_worker is empty tuple
| count.index is 4
Call to function "element" failed: cannot use element function with an empty
list.
Error: Error in function call
on ../modules/data-cluster/main.tf line 129, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
129: instance_id = element(aws_instance.data_cluster_worker.*.id,count.index)
|----------------
| aws_instance.data_cluster_worker is empty tuple
| count.index is 5
Call to function "element" failed: cannot use element function with an empty
list.
Error: Error in function call
on ../modules/data-cluster/main.tf line 129, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
129: instance_id = element(aws_instance.data_cluster_worker.*.id,count.index)
|----------------
| aws_instance.data_cluster_worker is empty tuple
| count.index is 2
Call to function "element" failed: cannot use element function with an empty
list.
Error: Error in function call
on ../modules/data-cluster/main.tf line 129, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
129: instance_id = element(aws_instance.data_cluster_worker.*.id,count.index)
|----------------
| aws_instance.data_cluster_worker is empty tuple
| count.index is 7
Call to function "element" failed: cannot use element function with an empty
list.
Error: Error in function call
on ../modules/data-cluster/main.tf line 129, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
129: instance_id = element(aws_instance.data_cluster_worker.*.id,count.index)
|----------------
| aws_instance.data_cluster_worker is empty tuple
| count.index is 0
Call to function "element" failed: cannot use element function with an empty
list.
Error: Error in function call
on ../modules/data-cluster/main.tf line 129, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
129: instance_id = element(aws_instance.data_cluster_worker.*.id,count.index)
|----------------
| aws_instance.data_cluster_worker is empty tuple
| count.index is 8
Call to function "element" failed: cannot use element function with an empty
list.
Error: Error in function call
on ../modules/data-cluster/main.tf line 129, in resource "aws_volume_attachment" "data-ebs-volumes-attach":
129: instance_id = element(aws_instance.data_cluster_worker.*.id,count.index)
|----------------
| aws_instance.data_cluster_worker is empty tuple
| count.index is 3
Call to function "element" failed: cannot use element function with an empty
list.
```
### Expected Behavior

It should have created 3 instances and 3 EBS volumes per instance, and attached those 3 volumes to each instance.

### Actual Behavior

Instead it throws an error; see the output above.

### Steps to Reproduce

`terraform plan`

### Additional Context

None.

Note that this has been attempted with several variations that have NOT helped:
I came here because I had a similar error. First of all, I'm quite sure that this is a Terraform bug.
Here is the short-term fix I did to unbreak my Terraform deployment. I'm describing what I did to my environment, using the values from your example. I hope this will work for you, but of course, I could not test this, since I don't have access to your deployment.
One error message tries to tell us that Terraform thinks `aws_ebs_volume.data-ebs-volumes` is an empty list (or tuple, or whatever collection). We know that `aws_ebs_volume.data-ebs-volumes` is not empty.
I think I got into this state by deleting resources out of band (without Terraform). Terraform should be able to detect and handle this drift, so I'm still confident that this is a bug.
Here is how I recovered. For simplicity, I'm only describing the fix for `aws_ebs_volume.data-ebs-volumes`; you have to apply the same trick to `aws_instance.data_cluster_worker` simultaneously (not sequentially).

1. Add a bound based on the affected resource (`aws_ebs_volume.data-ebs-volumes`) in the `count` field, for example `count = min(var.worker_instance_count * var.worker_ebs_storage_count, length(aws_ebs_volume.data-ebs-volumes))` (a sketch follows below).
2. Run `terraform plan` again. Do NOT run `apply`, since my change most likely changes the `count` and has bad consequences.
3. Run `terraform refresh`. The bug should disappear now.
4. Revert the change to the `count` field.

Note: this may be the same issue as https://github.com/hashicorp/terraform/issues/21917#issuecomment-506948754
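To make step 1 concrete, here is a minimal sketch of what the temporary `count` change might look like on the `aws_volume_attachment` resource from this issue. This is my reading of the steps above, not a tested fix, and the exact placement of the bound is an assumption:

```hcl
# Temporary change for the workaround only; revert it after `terraform refresh`.
# Bounding count by the lengths Terraform currently tracks keeps the resource
# from indexing past the (incorrectly) empty collections, covering both
# referenced resources "simultaneously" as the steps require.
resource "aws_volume_attachment" "data-ebs-volumes-attach" {
  count = min(
    var.worker_instance_count * var.worker_ebs_storage_count,
    length(aws_ebs_volume.data-ebs-volumes),
    length(aws_instance.data_cluster_worker)
  )
  volume_id   = aws_ebs_volume.data-ebs-volumes.*.id[count.index]
  device_name = element(var.device_names, count.index)
  instance_id = element(aws_instance.data_cluster_worker.*.id, count.index)
}
```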
One more thing: I see this bug is tagged as `service/ec2`. I encountered the error with a GCP resource and with a resource of a custom provider, all of which used `count`, so this is probably an error in Terraform core.
Can confirm this: @diekmann's `count` fix got me out of a bad state. This does seem to be a core bug with 0.12.
+1, I have the same problem with 0.12.16 and 0.12.17, tested with the EKS module. Running `terraform refresh` does not solve the problem on the next execution. The code had worked without problems and then began failing all at once, so reverting changes is not viable.
Have this same issue with Terraform v0.12.18, creating route table associations with subnets (that I've admittedly manually deleted while testing) using `count`. Seems like a serious core bug.
I fixed the issue using @diekmann's workaround steps, but I first set `count` to 0 and applied, then set `count` back to normal and applied again.
Ran into this issue with v0.12.20, using `count` to conditionally enable/disable a resource and referencing it elsewhere. I was also able to resolve it by setting the `count` value to something else, refreshing, and re-creating the change before applying again.
Ran into this issue as well with v0.12.17 using count to conditionally enable / disable a resource (apply with auto-approve) and was able to resolve it by running plan and apply again (both without auto-approve). Did not have to change the count as others have pointed out.
With Terraform 0.12.20 and AWS provider version 2.48.0, I ran into what appears to be the same issue described here, but with an `aws_s3_bucket` resource. Here are my reproduction steps:

1. `terraform apply` the following config:

   ```hcl
   resource "aws_s3_bucket" "test" {
     count  = 1
     bucket = "test"
   }

   output "bucket_name" {
     value = aws_s3_bucket.test[0].bucket
   }
   ```

2. Delete the bucket outside of Terraform: `aws s3 rb s3://test`

3. Run `terraform apply` again and receive the following error:

   ```
   aws_s3_bucket.test[0]: Refreshing state... [id=test]

   Error: Invalid index

     on test.tf line 15, in output "bucket_name":
     15: value = aws_s3_bucket.test[0].bucket
       |----------------
       | aws_s3_bucket.test is empty tuple

   The given key does not identify an element in this collection value.
   ```
As suggested earlier in this thread, I was able to get past the error by:

a. Either changing the `count` value in the `aws_s3_bucket` resource to something other than `1`, or commenting out the `output` variable (see the sketch after this list).
b. Running `terraform refresh`.
c. Reverting the changes in step a.
d. Running `terraform apply` again, this time successfully.
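A minimal sketch of step (a) applied to the reproduction config above, taking the "comment out the output" option (assuming the same single-file layout):

```hcl
resource "aws_s3_bucket" "test" {
  count  = 1
  bucket = "test"
}

# Temporarily commented out so `terraform refresh` can run without hitting the
# "Invalid index" error; restore it in step (c) and apply again in step (d).
# output "bucket_name" {
#   value = aws_s3_bucket.test[0].bucket
# }
```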
Trying to migrate from Terraform 0.11 to 0.12, and `terraform refresh` doesn't seem to do anything with regard to getting the data sources populated prior to an initial 0.12.20 apply. However, I can't run an initial 0.12.20 apply because Terraform wants to change all of the routes in my VPC peering table. It looks like the order of the route table IDs changed from unsorted in 0.11 to sorted in 0.12, which means `tolist(data.aws_route_tables.this_vpc_rts.ids)[0]` is now at a different index and Terraform wants to replace it.
I upgraded from Terraform 0.12.10 to 0.12.24 and got a very similar issue with an AWS stack.
```
Error: Invalid index

  on alb.tf line 32, in module "redacted":
  32: target_redacted_list = [aws_instance.redacted[0].id, aws_instance.redacted[1].id]
    |----------------
    | aws_instance.redacted is empty tuple

The given key does not identify an element in this collection value.

Error: Invalid index

  on alb.tf line 32, in module "redacted":
  32: target_redacted_list = [aws_instance.redacted[0].id, aws_instance.redacted[1].id]
    |----------------
    | aws_instance.redacted is empty tuple

The given key does not identify an element in this collection value.
...
```
Downgrading to 0.12.10 fixed the issue. AWS provider version 2.55.
Terraform 0.12.24 | AWS provider 2.58
Same issue. I am using `count` to enable/disable resources, and wherever I reference those resources I had to use the `[0]` index.
This is for a completely destroyed environment, after deleting all relevant local `.terraform` directories and remote state files.
I had to add a few conditionals for some of the resources in ways I didn't like; the HCL looks awful right now. I also use the `template_file` data source, so I had to add a `count` to enable/disable it, otherwise I get weird behaviour and have to start escaping things (even more hideous). On top of that, I had to add the same "root" condition at the places in the template where I referenced outside resources that were also `count`-based for enable/disable.
I'm not a big fan of using `count` to express conditionals for resources; I would prefer an attribute to enable/disable a resource, so that Terraform would know to do the same for its dependencies.
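For context, a minimal, hypothetical sketch (not the commenter's actual code) of the count-as-conditional pattern being discussed, where every downstream reference has to repeat the condition and the `[0]` index:

```hcl
variable "enable_server" {
  type    = bool
  default = true
}

resource "aws_instance" "server" {
  # count doubles as the enable/disable switch
  count         = var.enable_server ? 1 : 0
  ami           = "ami-0a313d6098716f372" # placeholder AMI, reused from the issue example
  instance_type = "t2.micro"
}

output "server_id" {
  # every reference needs the "[0]" index plus the repeated condition
  value = var.enable_server ? aws_instance.server[0].id : ""
}
```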
Facing the same issue with `count` on the OCI provider and with plain null provisioners as well. I have already upvoted this issue. It has been a few months now since moving to 0.12, and I am encountering these issues regularly for different resources, none of whose counts are empty. I have hit this sometimes in the plan phase and sometimes during apply/destroy. It seems to be a core issue with 0.12. Is there any timeline for when this will be fixed?
I'm facing the same issue.
Terraform version: 0.12.17
resource "aws_cloudwatch_metric_alarm" "tableau_server_error_alarm" {
depends_on = [aws_instance.ec2_server]
count = var.environment.environment == var.environment.environment ? 1 : 0
alarm_description = format("%s|%s|%s|%s", data.null_data_source.environment.outputs["SDLC"], "INFO", "${var.environment.app_name}", "Error while executing ${var.app_name}-server(Threshold - Errors > 0)")
alarm_name = "${var.app_name}-server-error"
comparison_operator = var.cw_comparison_operator
evaluation_periods = var.cw_evaluation_periods
threshold = var.cw_alarm_threshold_4xx
treat_missing_data = var.cw_treat_missing_data
alarm_actions = [var.funnel_sns]
insufficient_data_actions = []
metric_name = var.cw_metric_name
namespace = var.cw_namespace
period = var.cw_period
statistic = var.cw_statistic
dimensions = {
InstanceId = aws_instance.ec2_server[count.index].id
# InstanceId = "${element(aws_instance.ec2_server.*.id, count.index)}"
}
tags = merge(map("Name", "${var.app_name}-server-error"), merge(var.tags, var.s3_tags))
}
resource "aws_instance" "ec2_server" {
count = var.ec2_instance_count
ami = var.vpc_config.ec2_ami
instance_type = var.ec2_instance_type
subnet_id = element(distinct(compact(concat(list(var.vpc_config.ec2_subnet_id), var.ec2_subnet_ids))), count.index)
key_name = var.ec2_key_name
monitoring = var.ec2_monitoring
ebs_optimized = true
vpc_security_group_ids = [module.sg.sg_id]
iam_instance_profile = aws_iam_instance_profile.instance_profile.name
user_data = data.template_file.user_data_all_euro.rendered
lifecycle {
ignore_changes = [private_ip, root_block_device, ebs_block_device]
}
volume_tags = tags = merge(map("Name", "${var.app_name}-server"), merge(var.tags, var.s3_tags))
dynamic "root_block_device" {
for_each = var.root_block_device
content {
volume_size = lookup(root_block_device.value, "volume_size", "gp2")
volume_type = lookup(root_block_device.value, "volume_type", "200")
encrypted = lookup(root_block_device.value, "encrypted", true)
kms_key_id = lookup(root_block_device.value, "kms_key_id", data.aws_kms_key.ebs.arn)
}
}
dynamic "ebs_block_device" {
for_each = var.ebs_block_device
content {
device_name = ebs_block_device.value.device_name
encrypted = lookup(ebs_block_device.value, "encrypted", true)
kms_key_id = lookup(ebs_block_device.value, "kms_key_id", data.aws_kms_key.ebs.arn)
volume_size = lookup(ebs_block_device.value, "volume_size", "200")
volume_type = lookup(ebs_block_device.value, "volume_type", "gp2")
delete_on_termination = lookup(ebs_block_device.value, "delete_on_termination", true)
}
}
tags = merge(map("Name", "${var.app_name}-server"), merge(var.tags, var.s3_tags))
}
output "server_id" {
description = "Tableau Server Id:"
value = "${join(", ", aws_instance.ec2_server.*.id)}"
}
Facing similar behavior after upgrading from 0.12.7 to 0.12.24 with AWS provider version `~> 2.25.0`. After deploying with Terraform, some other mechanism terminated the instance; rerunning Terraform to deploy again returns the errors below. Reverting to 0.12.7 works without issue.
```
Error: Invalid index

  on outputs.tf line 106, in output "bec01_id":
  106: value = "aws_instance.bec01[0].id"
    |----------------
    | aws_instance.bec01 is empty tuple

The given key does not identify an element in this collection value.

Error: Invalid index

  on outputs.tf line 109, in output "bec01_private_ip":
  109: value = "aws_instance.bec01[0].private_ip"
    |----------------
    | aws_instance.bec01 is empty tuple

The given key does not identify an element in this collection value.
```
The only workaround I have found to bypass that error is to use the `try` function, for example:

```hcl
output "example" {
  value = try(aws_instance.ec2_instance[0].private_ip, "")
}
```

Hope it's useful for someone else.
https://www.terraform.io/docs/configuration/functions/try.html
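The same idea can also be applied at the resource level rather than in an output. A hypothetical sketch (the alarm and its names are placeholders, not taken from this thread):

```hcl
locals {
  # Falls back to an empty string when the counted instance is missing from
  # state, instead of failing with "Invalid index".
  primary_instance_id = try(aws_instance.ec2_instance[0].id, "")
}

resource "aws_cloudwatch_metric_alarm" "instance_status" {
  alarm_name          = "example-instance-status"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = 1
  metric_name         = "StatusCheckFailed"
  namespace           = "AWS/EC2"
  period              = 60
  statistic           = "Maximum"
  threshold           = 1

  dimensions = {
    InstanceId = local.primary_instance_id
  }
}
```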
On 0.12.24, if we destroy an instance in AWS outside of Terraform (via the console or some other script) and rerun Terraform to recreate that instance, we now see this behavior when running `terraform plan`. Previously this was NOT an issue on 12.07 or 11.07. Expected behavior: a fresh redeploy of the resources below without error.

Steps in the AWS console that created this error for Terraform:

- terminated the instance in AWS
- destroyed the 3 EBS volumes attached to the instance
- destroyed the ENI for the instance
```
aws_volume_attachment.job01_u01: Refreshing state... [id=vai-1800376936]
aws_volume_attachment.job01_u80: Refreshing state... [id=vai-845612090]
aws_volume_attachment.job01_u05: Refreshing state... [id=vai-1388828358]

Error: Invalid index

  on resources.tf line 455, in resource "aws_volume_attachment" "job01_u80":
  455: instance_id = aws_instance.job01[0].id
    |----------------
    | aws_instance.job01 is empty tuple

The given key does not identify an element in this collection value.
```
Additional notes:

- The Terraform instance configuration block uses `count = 1`, and always has.
- For anyone else: the conditional below, falling back to `null`, won't work, because `instance_id` is a required field:
resource "aws_volume_attachment" "job01_u80" {
device_name = "job01_u80"
volume_id = aws_ebs_volume.job01_u80.id
instance_id = length(aws_instance.job01) > 0 ? aws_instance.job01[0].id : null
}
Error: "instance_id": required field is not set
on resources.tf line 451, in resource "aws_volume_attachment" "job01_u80":
451: resource "aws_volume_attachment" "job01_u80" {
md5-543fee4c68aa18babb7d2292ff15df3a
resource "aws_volume_attachment" "job01_u80" {
device_name = "job01_u80"
volume_id = aws_ebs_volume.job01_u80.id
instance_id = length(aws_instance.job01) > 0 ? aws_instance.job01[0].id : ""
}
I would be careful here, as this lets you get around the plan's refresh phase; but if you're running your IaC through CI/CD with the auto-approve flag on apply, IMO this could cause additional errors if the condition evaluates to empty, which then means deploying volumes with no instance to attach to, or erroring out on apply.
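One alternative, borrowing the enable/disable-via-count pattern discussed elsewhere in this thread, is to make the attachment itself conditional instead of passing an empty string. This is a sketch only, with the same caveat: if the instance is wrongly seen as an empty tuple during refresh, the attachment drops out of the plan and a second apply is needed.

```hcl
resource "aws_volume_attachment" "job01_u80" {
  # Skip the attachment entirely while the instance is absent from state,
  # rather than handing an empty string to a required field.
  count       = length(aws_instance.job01) > 0 ? 1 : 0
  device_name = "job01_u80"
  volume_id   = aws_ebs_volume.job01_u80.id
  instance_id = aws_instance.job01[0].id
}
```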
It appears to be possible to encounter this if you create an `aws_vpc_endpoint` resource with `count`, and the VPC endpoint transitions to a "rejected" or "deleted" state outside of Terraform.
```
2020-06-03T13:37:22.205-0500 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: <Response><Errors><Error><Code>InvalidVpcEndpointId.NotFound</Code><Message>The Vpc Endpoint Id 'vpce-01460b6963ec88e08' does not exist</Message></Error></Errors><RequestID>66b785da-cc13-4a6c-b86a-459ce9cc8916</RequestID></Response>
2020-06-03T13:37:22.205-0500 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: 2020/06/03 13:37:22 [DEBUG] [aws-sdk-go] DEBUG: Validate Response ec2/DescribeVpcEndpoints failed, attempt 0/25, error InvalidVpcEndpointId.NotFound: The Vpc Endpoint Id 'vpce-01460b6963ec88e08' does not exist
2020-06-03T13:37:22.205-0500 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: status code: 400, request id: 66b785da-cc13-4a6c-b86a-459ce9cc8916
2020-06-03T13:37:22.205-0500 [DEBUG] plugin.terraform-provider-aws_v2.64.0_x4: 2020/06/03 13:37:22 [WARN] VPC Endpoint (vpce-01460b6963ec88e08) in state (deleted), removing from state
```
In my case, the fix was to `terraform state rm` the offending state entry linked to the VPC endpoint. `taint` did not work, because it resulted in the same "Invalid index" error; `refresh` had no effect either.
> The only workaround I have found to bypass that error is to use the `try` function, for example: `output "example" { value = try(aws_instance.ec2_instance[0].private_ip, "") }` Hope it's useful for someone else. https://www.terraform.io/docs/configuration/functions/try.html
Worked for me at the resource level using `try`.
> The only workaround I have found to bypass that error is to use the `try` function, for example: `output "example" { value = try(aws_instance.ec2_instance[0].private_ip, "") }` Hope it's useful for someone else. https://www.terraform.io/docs/configuration/functions/try.html
>
> Worked for me at the resource level using `try`.

Where did you add this `try` function? Could you give me an example from your code?
> The only workaround I have found to bypass that error is to use the `try` function, for example: `output "example" { value = try(aws_instance.ec2_instance[0].private_ip, "") }` Hope it's useful for someone else. https://www.terraform.io/docs/configuration/functions/try.html
>
> Worked for me at the resource level using `try`.
>
> Where did you add this `try` function? Could you give me an example from your code?
Sorry for the late reply.
I used it in a module setup where my `cron_job` module depends on a `lambda` module. Sometimes I don't want to create the Lambda but I do want to set up the cron job; in that case the `lambda_function_arn` argument gets a null value, which causes a Terraform failure.
(There could be a better way,) but I solved it like this; sharing my code for better understanding:
```hcl
module "cron_job" {
  source                   = "../modules/lambda_cron"
  lambda_create            = var.lambda_create
  event_name               = "${local.lambda_function_name}"
  schedule_expression      = var.lambda_cron_schedule_expression_1
  lambda_cron_statement_id = "AllowExecutionFromCloudWatch-1"
  lambda_function_arn      = try(module.lambda.lambda_function_arn[0], "")
  lambda_function_name     = "${local.lambda_glue_function_name}"
}
```
@learnhub17 Thank you for the reply.