_This issue was originally opened by @DenisBY as hashicorp/terraform#14206. It was migrated here as part of the provider split. The original body of the issue is below._
Terraform ignores changed values in the `launch_specification` of `aws_spot_fleet_request`.
Terraform v0.9.4
variable "volume_type" {
default = "gp2"
}
resource "aws_spot_fleet_request" "fleet" {
iam_fleet_role = "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-role"
spot_price = "0.114"
allocation_strategy = "lowestPrice"
target_capacity = 2
replace_unhealthy_instances = true
terminate_instances_with_expiration = true
valid_until = "3000-01-01T00:00:00Z"
launch_specification {
ami = "${var.ecs_ami}"
instance_type = "${element(var.instance_types[var.ecs_cluster_name], 0)}"
iam_instance_profile = "ecs-instance-profile"
key_name = "${var.ssh_key_name}"
monitoring = "false"
vpc_security_group_ids = ["${split(",", lookup(var.security_groups, var.env_name))}"]
subnet_id = "${lookup(var.subnet_id_A, var.env_name)}"
user_data = "${data.template_file.bootstrap.rendered}"
weighted_capacity = 1
root_block_device = {
volume_type = "${var.volume_type}"
volume_size = "${var.root_volume_size}"
delete_on_termination = true
}
ebs_block_device = {
device_name = "/dev/xvdcz"
volume_size = "${var.data_volume_size}"
volume_type = "${var.volume_type}"
delete_on_termination = true
}
}
}
**Expected Behavior**

Changing a value inside the spot fleet's `launch_specification` produces a diff and updates the fleet.

**Actual Behavior**

Terraform reports: `No changes. Infrastructure is up-to-date.`

**Steps to Reproduce**

1. `terraform apply`
2. Change `volume_type` (e.g. from `gp2` to `io1`) or `volume_size`
3. `terraform plan` or `terraform apply`
I'm also seeing this issue with changing the user data...
This is a serious bug in my opinion!
The workaround for this is to change some configuration on the overall spot fleet request, such as `spot_price`, so that Terraform detects a diff and replaces the fleet.
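A minimal sketch of that workaround (the new price is illustrative): changing an attribute the resource does diff on, such as `spot_price`, forces the fleet request to be replaced, and the updated `launch_specification` takes effect with it.

```hcl
resource "aws_spot_fleet_request" "fleet" {
  # ...
  # Illustrative bump: changing spot_price is detected, so the fleet
  # request is replaced along with the updated launch_specification.
  spot_price = "0.115" # was "0.114"
  # ...
}
```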
👍
This is affecting us quite a bit.
Same issue here.
For me, it was `spot_fleet_request.launch_specification.user_data`: I wanted it to be empty, so I left it undefined. That produced a perpetual diff, because state stores the SHA-1 of the (empty) user data while the configuration supplies nothing:

```
~ aws_spot_fleet_request.example
    <snip>
    launch_specification.43392833.user_data: "da39a3ee5e6b4b0d3255bfef95601890afd80709" => ""
    <snip>
```

The workaround was to explicitly set `user_data = ""` in the `launch_specification`.
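In configuration terms, the workaround looks like this (a sketch based on the comment above; the other attributes are omitted):

```hcl
launch_specification {
  # ...
  # Explicitly set user_data to the empty string instead of omitting it,
  # so the stored state and the configuration agree and the diff goes away.
  user_data = ""
}
```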
I ran into this bug today. Changing the `user_data` of the `launch_specification` (to update the app) did nothing.
Hi! Still no updates?
I looked into this a bit; here's what I found so far (correct me if I'm wrong anywhere):

The reason why this is happening seems to be more or less straightforward: the launch specification has a custom hash function that only looks at a few attributes (`ami`, `availability_zone`, `subnet_id`, `instance_type`, `spot_price`). Any change to an attribute outside that list leaves the set hash unchanged, so Terraform sees no diff.

So in theory, adding more attributes there would solve the issue (my guess is we could also remove the whole hash function and rely on the default behaviour, which, as I understand it, hashes all attributes; that is probably what we'd want).

When I tried that, though, an issue popped up with `user_data` (which, to me, is the most interesting attribute): for this attribute we have a custom `StateFunc` that replaces the actual `user_data` value with its SHA-1. It seems that when we try to hash the `launch_specification`, in some circumstances the SHA-1 is calculated twice, leading to different hashes for a `launch_specification` that should be the same. I'm not sure how to fix that except by checking whether `user_data` is already hashed, which would be hacky (as in, "is it a hex string of the right length?").
Thoughts?
I just got bit by this.
My expectation as a user is that Terraform should detect and enforce more or less _every_ attribute I've specified in the Terraform code unless I blacklist it with `ignore_changes`. Resources that violate that (like this one) should have really big warnings in the documentation if there's a reason they can't be fixed.
Just submitted two PRs: one to add a warning to the docs, the other to add some more attributes to the hashing (but continuing to ignore `user_data`). While this might improve the situation somewhat for some folks, I'd guess that overall it's still not a very satisfying state of affairs.
Thankfully, Launch Templates combined with EC2 Fleet or an Auto Scaling Group seem to achieve pretty much the same thing as Spot Fleets (spinning up a set of spot instances), so the most pragmatic approach might be to migrate away from this particular resource; see the sketch below. That's the approach we took in my current project, and we have found few issues so far.
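For reference, a minimal sketch of that migration path; the resource names and sizing are illustrative, it reuses variables from the original config above, and it assumes a reasonably recent Terraform and AWS provider version:

```hcl
# A Launch Template plus an Auto Scaling Group as a stand-in for
# aws_spot_fleet_request. Attribute changes here produce normal diffs.
resource "aws_launch_template" "example" {
  name_prefix   = "example-"
  image_id      = var.ecs_ami
  instance_type = "m4.large"
  user_data     = base64encode(data.template_file.bootstrap.rendered)

  # Request spot instances instead of on-demand.
  instance_market_options {
    market_type = "spot"
  }
}

resource "aws_autoscaling_group" "example" {
  desired_capacity    = 2
  min_size            = 0
  max_size            = 4
  vpc_zone_identifier = [lookup(var.subnet_id_A, var.env_name)]

  launch_template {
    id      = aws_launch_template.example.id
    version = "$Latest"
  }
}
```

Note that a change to the launch template creates a new template version; with `version = "$Latest"`, new instances pick it up, but existing instances are not replaced automatically.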