Terraform v0.9.1
Affected resource:
```hcl
resource "aws_spot_fleet_request" "nodes-elasticsearch-spot-production-k8s" {
  iam_fleet_role      = "arn:aws:iam::example:role/aws-ec2-spot-fleet-role"
  spot_price          = "0.293"
  allocation_strategy = "lowestPrice"
  target_capacity     = 12
  valid_until         = "2019-11-04T12:00:00Z"

  launch_specification {
    instance_type               = "m3.xlarge"
    ami                         = "ami-fa5c7489"
    key_name                    = "${aws_key_pair.kubernetes-production-k8s-example-com.id}"
    availability_zone           = "${aws_subnet.eu-west-1a-production-k8s-example-com.availability_zone}"
    subnet_id                   = "${aws_subnet.eu-west-1a-production-k8s-example-com.id}"
    user_data                   = "${file("${path.module}/data/user_data.node.elasticsearch.sh")}"
    iam_instance_profile        = "${aws_iam_instance_profile.nodes-production-k8s-example-com.id}"
    vpc_security_group_ids      = ["${aws_security_group.nodes-production-k8s-example-com.id}"]
    associate_public_ip_address = true

    root_block_device {
      volume_size           = "20"
      volume_type           = "gp2"
      delete_on_termination = true
    }

    ebs_block_device {
      device_name           = "/dev/xvdg"
      volume_type           = "gp2"
      volume_size           = "100"
      delete_on_termination = true
    }
  }
}
```
Sorry, it's pretty hard to strip private data from such output.
Panic output: none.
Expected behavior: no spot fleet request recreation.
Actual behavior: the fleet is recreated every time.
Steps to reproduce: run `terraform apply`, then `terraform apply` again -> the fleet is recreated.

This is probably what happened:
We use remote S3 state. It seems that this remote state correctly tracks the id (sfr-*).
Also, see the output from `terraform plan` (and `apply`):
```
allocation_strategy: "lowestPrice" => "lowestPrice"
client_token: "terraform-00baa70e260848a6cd1baa8a09" => "<computed>"
excess_capacity_termination_policy: "Default" => "Default"
iam_fleet_role: "arn:aws:iam::example:role/aws-ec2-spot-fleet-role" => "arn:aws:iam::example:role/aws-ec2-spot-fleet-role"
launch_specification.#: "1" => "1"
launch_specification.1125083684.ami: "ami-fa5c7489" => "" (forces new resource)
launch_specification.1125083684.associate_public_ip_address: "false" => "false"
launch_specification.1125083684.ebs_optimized: "false" => "false"
launch_specification.1125083684.iam_instance_profile: "nodes.production.k8s.example.com" => "" (forces new resource)
launch_specification.1125083684.instance_type: "m3.xlarge" => "" (forces new resource)
launch_specification.1125083684.monitoring: "false" => "false"
launch_specification.1125083684.spot_price: "" => ""
launch_specification.1125083684.user_data: "376162e0e0cb7583c27a7c77bbe89004db5dab87" => "" (forces new resource)
launch_specification.1125083684.weighted_capacity: "" => ""
launch_specification.3218522932.ami: "" => "ami-fa5c7489" (forces new resource)
launch_specification.3218522932.associate_public_ip_address: "" => "true"
launch_specification.3218522932.availability_zone: "" => "eu-west-1a" (forces new resource)
launch_specification.3218522932.ebs_block_device.#: "" => "1"
launch_specification.3218522932.ebs_block_device.3994770134.delete_on_termination: "" => "true" (forces new resource)
launch_specification.3218522932.ebs_block_device.3994770134.device_name: "" => "/dev/xvdg" (forces new resource)
launch_specification.3218522932.ebs_block_device.3994770134.encrypted: "" => "<computed>" (forces new resource)
launch_specification.3218522932.ebs_block_device.3994770134.iops: "" => "<computed>" (forces new resource)
launch_specification.3218522932.ebs_block_device.3994770134.snapshot_id: "" => "<computed>" (forces new resource)
launch_specification.3218522932.ebs_block_device.3994770134.volume_size: "" => "100" (forces new resource)
launch_specification.3218522932.ebs_block_device.3994770134.volume_type: "" => "gp2" (forces new resource)
launch_specification.3218522932.ebs_optimized: "" => "false"
launch_specification.3218522932.ephemeral_block_device.#: "" => "<computed>" (forces new resource)
launch_specification.3218522932.iam_instance_profile: "" => "nodes.production.k8s.example.com" (forces new resource)
launch_specification.3218522932.instance_type: "" => "m3.xlarge" (forces new resource)
launch_specification.3218522932.key_name: "" => "kubernetes.production.k8s.example.com" (forces new resource)
launch_specification.3218522932.monitoring: "" => "false"
launch_specification.3218522932.placement_group: "" => "<computed>" (forces new resource)
launch_specification.3218522932.root_block_device.#: "" => "1"
launch_specification.3218522932.root_block_device.0.delete_on_termination: "" => "true" (forces new resource)
launch_specification.3218522932.root_block_device.0.iops: "" => "<computed>" (forces new resource)
launch_specification.3218522932.root_block_device.0.volume_size: "" => "20" (forces new resource)
launch_specification.3218522932.root_block_device.0.volume_type: "" => "gp2" (forces new resource)
launch_specification.3218522932.spot_price: "" => ""
launch_specification.3218522932.subnet_id: "" => "subnet-c7b9a7b1" (forces new resource)
launch_specification.3218522932.user_data: "" => "376162e0e0cb7583c27a7c77bbe89004db5dab87" (forces new resource)
launch_specification.3218522932.vpc_security_group_ids.#: "" => "1"
launch_specification.3218522932.vpc_security_group_ids.2814185034: "" => "sg-64ee291d"
launch_specification.3218522932.weighted_capacity: "" => ""
replace_unhealthy_instances: "false" => "false"
spot_price: "0.293" => "0.293"
spot_request_state: "active" => "<computed>"
target_capacity: "12" => "12"
valid_until: "2019-11-04T12:00:00Z" => "2019-11-04T12:00:00Z"
```
There is a visible mismatch in the numeric launch_specification keys: the remote state contains only the 1125... index.
In the remote state I also discovered a property that is present with the wrong value:
"launch_specification.1125083684.associate_public_ip_address": "false",
That's certainly wrong: the instances are running with a public IP.
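For context, those numeric keys (1125083684, 3218522932) are set indexes that Terraform derives by hashing the serialized attributes of the `launch_specification` block. The sketch below is a hedged illustration of the idea in Python, not the actual provider code (the real logic lives in Terraform's Go `helper/hashcode` package and the resource's Set function): if any hashed attribute changes, the whole block gets a new index, so the plan reads as "old block removed, new block added" and every ForceNew attribute inside it forces a recreate.

```python
import zlib

def set_index(attrs):
    # Hypothetical serialization: concatenate sorted key/value pairs and
    # CRC32 the result, clamped to a non-negative int (as hashcode does).
    buf = "".join("%s-%s-" % (k, v) for k, v in sorted(attrs.items()))
    return zlib.crc32(buf.encode()) & 0x7FFFFFFF

old = {"ami": "ami-fa5c7489", "instance_type": "m3.xlarge",
       "associate_public_ip_address": "false"}
# Flip a single attribute: the serialized string differs, so the index differs.
new = dict(old, associate_public_ip_address="true")

print(set_index(old) != set_index(new))
```

This is why a wrong value persisted in state (like `associate_public_ip_address: "false"` when the instances actually have public IPs) is enough to make every plan propose a full replacement.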
I was dealing with this issue as well, but after a bit of trial and error I've landed on a configuration that works as expected:
```hcl
resource "aws_spot_fleet_request" "app_web_servers" {
  iam_fleet_role                     = "a real arn"
  allocation_strategy                = "diversified"
  target_capacity                    = 2
  valid_until                        = "2050-01-01T00:00:00Z"
  replace_unhealthy_instances        = true
  spot_price                         = "0.25"
  excess_capacity_termination_policy = "Default"

  launch_specification {
    ami                         = "${var.app_server_ami}"
    availability_zone           = "us-west-2c"
    subnet_id                   = "${var.subnet_id}"
    instance_type               = "m1.small"
    vpc_security_group_ids      = ["${aws_security_group.app_server.id}"]
    iam_instance_profile        = "app-server"
    associate_public_ip_address = true
    key_name                    = "key"
  }

  launch_specification {
    ami                         = "${var.app_server_ami}"
    availability_zone           = "us-west-2c"
    subnet_id                   = "${var.subnet_id}"
    instance_type               = "m1.medium"
    vpc_security_group_ids      = ["${aws_security_group.app_server.id}"]
    iam_instance_profile        = "app-server"
    associate_public_ip_address = true
    key_name                    = "key"
  }
}
```
I can't imagine why mine works and yours doesn't... Something causes Terraform to calculate a new hash when we don't want it to, but I have no idea what. :man_shrugging:
Edit: Thought I'd mention as well that changing the target capacity results in a spot fleet update which is pretty sweet.
Edit 2: Now it's not working again. I must be taking crazy pills.
Edit 3: Looks like there's an issue with `associate_public_ip_address`: with it set to `true`, Terraform always recreates the fleet. But when it's commented out, the fleet is not recreated, even if the existing fleet was created with the setting on. Seems like a bug.
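To illustrate the workaround from Edit 3, here is a sketch of the first launch specification with the offending attribute commented out (names and values are the placeholders from above, not a verified fix):

```hcl
resource "aws_spot_fleet_request" "app_web_servers" {
  iam_fleet_role  = "a real arn"
  spot_price      = "0.25"
  target_capacity = 2
  valid_until     = "2050-01-01T00:00:00Z"

  launch_specification {
    ami                    = "${var.app_server_ami}"
    availability_zone      = "us-west-2c"
    subnet_id              = "${var.subnet_id}"
    instance_type          = "m1.small"
    vpc_security_group_ids = ["${aws_security_group.app_server.id}"]
    iam_instance_profile   = "app-server"
    key_name               = "key"

    # associate_public_ip_address = true
    # Leaving this out avoids the perpetual recreate; instances may still
    # get a public IP if the subnet assigns one on launch.
  }
}
```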
@dashkb Thanks for debugging, it helped me, though it still took me 5 hours to fix (Windows, cross-compile, ...) :D The PR is still missing tests, but if you could compile it and try it out, that would be great :)
Fixed in #13748, thanks!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.