Hi,
I have two subnets (public and private) and one EC2 instance in these subnets. When I run terraform apply to remove the private subnet, I receive this error message:
Error applying plan:
1 error(s) occurred:
But when I run terraform destroy to tear down my whole configuration, I don't hit this timeout.
Hi @jcmartins,
Can you please provide a sample of the Terraform config that you are using to create the VPC and subnets? [Please remove any access keys.] This will help us try to recreate the bug.
Paul
Hi @stack72,
provider "aws" {
  access_key = "xxxxxxxx"
  secret_key = "xxxxxx"
  region     = "us-west-2"
}

resource "aws_key_pair" "ttg-dev-root-key" {
  key_name   = "joao"
  public_key = "ssh-rsa xxxxxx"
}

resource "aws_vpc" "terraform-vpc-ttg-dev" {
  cidr_block = "172.30.0.0/16"

  tags {
    Name = "terraform-vpc"
  }
}

resource "aws_internet_gateway" "terraform-igw-ttg-dev" {
  vpc_id = "${aws_vpc.terraform-vpc-ttg-dev.id}"

  tags {
    Name = "terraform-ttg-gw"
  }
}

resource "aws_subnet" "terraform-subnet-ttg-pub" {
  vpc_id                  = "${aws_vpc.terraform-vpc-ttg-dev.id}"
  availability_zone       = "us-west-2a"
  cidr_block              = "172.30.1.0/24"
  map_public_ip_on_launch = true

  tags {
    Name = "terraform-subnet-ttg-pub"
  }
}

resource "aws_subnet" "terraform-subnet-ttg-pvt" {
  vpc_id            = "${aws_vpc.terraform-vpc-ttg-dev.id}"
  availability_zone = "us-west-2a"
  cidr_block        = "172.30.5.0/24"

  tags {
    Name = "terraform-subnet-ttg-pvt"
  }
}

resource "aws_elb" "terraform-elb-ttg-dev" {
  name            = "terraform-ttg-elb"
  subnets         = ["${aws_subnet.terraform-subnet-ttg-pub.id}"]
  security_groups = ["${aws_security_group.terraform-sg-ttg-dev.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 3
    unhealthy_threshold = 3
    interval            = 15
    timeout             = 5
    target              = "HTTP:80/"
  }

  tags {
    Name = "terraform-ttg-elb"
  }
}

resource "aws_security_group" "terraform-sg-ttg-dev" {
  name        = "Aws-Terraform-ttg-Dev"
  description = "Terraform ttg Dev SG"
  vpc_id      = "${aws_vpc.terraform-vpc-ttg-dev.id}"

  ingress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_launch_configuration" "ttg-dev" {
  name_prefix     = "terraform-ttggroup-"
  image_id        = "ami-25c52345"
  instance_type   = "t2.small"
  key_name        = "${aws_key_pair.ttg-dev-root-key.key_name}"
  user_data       = "#!/bin/bash\napt-get update && apt-get -y --no-install-recommends install nginx && service nginx restart"
  security_groups = ["${aws_security_group.terraform-sg-ttg-dev.id}"]

  lifecycle {
    create_before_destroy = true # create a new LC before destroying the old one (if the config changes)
  }
}

resource "aws_autoscaling_group" "ttg-dev-asg" {
  name                 = "autoscaling-terraform-ttg-dev"
  launch_configuration = "${aws_launch_configuration.ttg-dev.name}"
  vpc_zone_identifier  = ["${aws_subnet.terraform-subnet-ttg-pub.id}"]
  load_balancers       = ["${aws_elb.terraform-elb-ttg-dev.name}"]
  min_size             = 1
  max_size             = 3
  desired_capacity     = 1

  lifecycle {
    create_before_destroy = true # create a new ASG before destroying the old one (if the config changes)
  }
}
Thanks,
Joao
Hi @jcmartins, 1 more thing, what version of Terraform are you using?
Terraform v0.6.11
@jcmartins, ok I have not been able to replicate this:
terraform apply
aws_subnet.terraform-subnet-ttg-pvt: Refreshing state... (ID: subnet-61c92105)
aws_key_pair.ttg-dev-root-key: Refreshing state... (ID: joao)
aws_vpc.terraform-vpc-ttg-dev: Refreshing state... (ID: vpc-507d9a34)
aws_internet_gateway.terraform-igw-ttg-dev: Refreshing state... (ID: igw-67f3ba02)
aws_subnet.terraform-subnet-ttg-pub: Refreshing state... (ID: subnet-60c92104)
aws_security_group.terraform-sg-ttg-dev: Refreshing state... (ID: sg-d7d772b0)
aws_launch_configuration.ttg-dev: Refreshing state... (ID: terraform-ttggroup-z7qerezlirblbbuibja525d4ue)
aws_elb.terraform-elb-ttg-dev: Refreshing state... (ID: terraform-ttg-elb)
aws_autoscaling_group.ttg-dev-asg: Refreshing state... (ID: autoscaling-terraform-ttg-dev)
aws_subnet.terraform-subnet-ttg-pvt: Destroying...
aws_subnet.terraform-subnet-ttg-pvt: Destruction complete
It destroyed the first time for me.
Can you paste me the output of the command that fails?
@stack72 don't do a complete destroy; only remove one subnet and run terraform apply again, and you will hit the timeout error.
@jcmartins so I have followed the steps:
at this point I should get a timeout, correct?
If so, I didn't; it re-added the subnet with no problems at all.
terraform apply
aws_vpc.terraform-vpc-ttg-dev: Refreshing state... (ID: vpc-507d9a34)
aws_key_pair.ttg-dev-root-key: Refreshing state... (ID: joao)
aws_subnet.terraform-subnet-ttg-pub: Refreshing state... (ID: subnet-60c92104)
aws_internet_gateway.terraform-igw-ttg-dev: Refreshing state... (ID: igw-67f3ba02)
aws_security_group.terraform-sg-ttg-dev: Refreshing state... (ID: sg-d7d772b0)
aws_launch_configuration.ttg-dev: Refreshing state... (ID: terraform-ttggroup-z7qerezlirblbbuibja525d4ue)
aws_elb.terraform-elb-ttg-dev: Refreshing state... (ID: terraform-ttg-elb)
aws_autoscaling_group.ttg-dev-asg: Refreshing state... (ID: autoscaling-terraform-ttg-dev)
aws_subnet.terraform-subnet-ttg-pvt: Creating...
  availability_zone: "" => "us-west-2a"
  cidr_block: "" => "172.30.5.0/24"
  map_public_ip_on_launch: "" => "0"
  tags.#: "" => "1"
  tags.Name: "" => "terraform-subnet-ttg-pvt"
  vpc_id: "" => "vpc-507d9a34"
aws_subnet.terraform-subnet-ttg-pvt: Creation complete
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate
I am getting a similar timeout error, but when replacing a VPC.
The previous config for the VPC was:
resource "aws_vpc" "default" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}
And the updated config is:
resource "aws_vpc" "default" {
  cidr_block           = "${var.vpc_cidr}"
  enable_dns_hostnames = true

  tags {
    Name = "forrge"
  }
}
terraform plan shows this (along with adding a number of other resources, but it doesn't get that far):
-/+ aws_vpc.default
    cidr_block: "10.0.0.0/16" => "10.128.0.0/16" (forces new resource)
    default_network_acl_id: "acl-c8c921ac" => "<computed>"
    default_security_group_id: "sg-1761bb70" => "<computed>"
    dhcp_options_id: "dopt-fb39b99e" => "<computed>"
    enable_classiclink: "false" => "<computed>"
    enable_dns_hostnames: "true" => "1"
    enable_dns_support: "true" => "<computed>"
    main_route_table_id: "rtb-6112eb05" => "<computed>"
    tags.#: "0" => "1"
    tags.Name: "" => "forrge"
The output of terraform apply is:
✔ cecchi-macbook ~/projects/infrastructure (master) > terraform apply
aws_key_pair.deployer: Refreshing state... (ID: deployer-key)
aws_vpc.default: Refreshing state... (ID: vpc-936582f7)
aws_internet_gateway.default: Refreshing state... (ID: igw-c0e0a9a5)
aws_vpc.default: Destroying...
Error applying plan:
1 error(s) occurred:
* aws_vpc.default: timeout while waiting for state to become '[success]'
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
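If the intent of the change above was only to add the tags (rather than to move to a new CIDR range), the forced replacement, and the destroy timeout that comes with it, can be avoided by keeping the variable's default equal to the VPC's existing CIDR. A minimal sketch, assuming a variables file alongside the config above (the variable name is taken from that config):

```hcl
# Sketch: with the default matching the VPC's current CIDR, the plan
# becomes an in-place update (tags only) instead of "forces new resource".
variable "vpc_cidr" {
  default = "10.0.0.0/16"
}
```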
I am also getting a similar error using 0.6.12.
The config file is located below:
https://gist.github.com/felixgao/fac4de0906b3c7c75d21
aws_vpc.main: Refreshing state... (ID: vpc-f213fd96)
aws_internet_gateway.default: Refreshing state... (ID: igw-abdd8ace)
aws_security_group.default: Refreshing state... (ID: sg-2458e943)
aws_subnet.subnet.0: Refreshing state... (ID: subnet-de4ebeba)
aws_subnet.subnet.1: Refreshing state... (ID: subnet-880f1bff)
aws_subnet.subnet.2: Refreshing state... (ID: subnet-54a0830d)
aws_launch_configuration.web-lc: Refreshing state... (ID: terraform-example-lc)
aws_elb.web-elb: Creating...
  availability_zones.#: "" => "<computed>"
  connection_draining: "" => "0"
  connection_draining_timeout: "" => "300"
  dns_name: "" => "<computed>"
  health_check.#: "" => "1"
  health_check.0.healthy_threshold: "" => "2"
  health_check.0.interval: "" => "30"
  health_check.0.target: "" => "HTTP:80/"
  health_check.0.timeout: "" => "3"
  health_check.0.unhealthy_threshold: "" => "2"
  idle_timeout: "" => "60"
  instances.#: "" => "<computed>"
  internal: "" => "<computed>"
  listener.#: "" => "1"
  listener.3057123346.instance_port: "" => "80"
  listener.3057123346.instance_protocol: "" => "http"
  listener.3057123346.lb_port: "" => "80"
  listener.3057123346.lb_protocol: "" => "http"
  listener.3057123346.ssl_certificate_id: "" => ""
  name: "" => "terraform-example-elb"
  security_groups.#: "" => "<computed>"
  source_security_group: "" => "<computed>"
  source_security_group_id: "" => "<computed>"
  subnets.#: "" => "3"
  subnets.1563875928: "" => "subnet-de4ebeba"
  subnets.3654181985: "" => "subnet-54a0830d"
  subnets.3883100686: "" => "subnet-880f1bff"
  zone_id: "" => "<computed>"
aws_elb.web-elb: Creation complete
aws_autoscaling_group.web-asg: Creating...
  availability_zones.#: "" => "3"
  availability_zones.2050015877: "" => "us-west-2c"
  availability_zones.221770259: "" => "us-west-2b"
  availability_zones.2487133097: "" => "us-west-2a"
  default_cooldown: "" => "<computed>"
  desired_capacity: "" => "1"
  force_delete: "" => "1"
  health_check_grace_period: "" => "<computed>"
  health_check_type: "" => "<computed>"
  launch_configuration: "" => "terraform-example-lc"
  load_balancers.#: "" => "1"
  load_balancers.2211072046: "" => "terraform-example-elb"
  max_size: "" => "2"
  min_size: "" => "1"
  name: "" => "terraform-example-asg"
  tag.#: "" => "1"
  tag.2421615522.key: "" => "Name"
  tag.2421615522.propagate_at_launch: "" => "1"
  tag.2421615522.value: "" => "web-asg"
  vpc_zone_identifier.#: "" => "<computed>"
  wait_for_capacity_timeout: "" => "10m"
Error applying plan:
1 error(s) occurred:
* aws_autoscaling_group.web-asg: timeout while waiting for state to become '[success]'
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Also receiving this error using terraform 0.6.12. In my case my initial terraform apply failed and then I made a series of changes.
+1 for this error with 0.6.11 and 0.6.12 (tried an upgrade) here. Some diag output is over here.
In our case several aws_security_group resources were changed from TF-generated names to explicit names during development. It looks like plan correctly identified what had to change in the graph but destroying the old SGs timed out. The old SGs still exist with the original names in the AWS console.
Since we're unlikely to make such a change during the normal course of development I'll probably destroy and recreate but figured I'd report it and may be able to whittle this down to a smaller test case.
Happy to provide whatever additional debugging info may be helpful and to attempt a repro, but I don't want to pile on. Thanks.
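For what it's worth, one pattern that can help when renaming security groups that are still attached to running instances is to let Terraform create the replacement group before destroying the original. A minimal sketch with hypothetical resource names (not the config from this thread):

```hcl
resource "aws_security_group" "web" {
  # name_prefix lets AWS generate a unique suffix, so the replacement
  # group can exist alongside the old one during the change.
  name_prefix = "web-"
  vpc_id      = "${aws_vpc.main.id}"

  lifecycle {
    # Create the new group first so dependent resources can be repointed
    # before the old group is destroyed, instead of destroying first and
    # timing out on a group that is still in use.
    create_before_destroy = true
  }
}
```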
I'd like to suggest this is related to the timeout I get while working with "large" AWS instances; in this case, an i2.4xlarge instance. Everything works as well as it normally does with smaller instances, but the i2 types take longer than Terraform expects. I've seen requests for configurable timeouts in other issues; that feature would alleviate this pain.
Error applying plan:
1 error(s) occurred:
* aws_instance.viewer: Error waiting for instance (i-8c41ba0f) to terminate: timeout while waiting for state to become '[terminated]'
+1 for configurable timeouts in provider declarations. I've been getting this error quite often while using Terraform to bootstrap AWS infrastructure in the Asia-Pacific regions, and especially China. It doesn't come up for every resource, but when it does it is nearly 100% reproducible. I suspect that some of the AWS API endpoints are too slow for the timeout, but others are just fast enough.
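Later Terraform releases did add per-resource timeout overrides on some AWS resources via a timeouts block. A minimal sketch, with hypothetical values, assuming a provider version that supports it:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-xxxxxxxx"
  instance_type = "i2.4xlarge"

  # Override the provider's default operation timeouts for slow regions
  # or large instance types (supported on newer provider versions).
  timeouts {
    create = "30m"
    delete = "30m"
  }
}
```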
I believe this issue is fixed with #5460
Hey all – this should be fixed in #5460, so I'm going to close this.
If you have other questions, please let us know or open a separate issue.
Thanks!
Got the same issue, but with aws_instance.
aws_instance.xxxxx-instance-nat: Error launching source instance: timeout while waiting for state to become 'success' (timeout: 15s)
My terraform version is Terraform v0.7.4 from Brew
I also got the same message as NicolasMas when provisioning with a count of 9. The instances were actually created, so it wasn't the historical "not enough capacity for instance type" error that I have seen attributed to this issue.
Terraform v0.7.6
I also see the same behavior with v0.11.7: the instances are created (they show up in the AWS console), but the run fails with the error below.
aws_instance.mysql_apiadmin.1: Error waiting for instance (i-xxxx) to become ready: timeout while waiting for state to become 'running' (timeout: 10m0s)
Even when rerunning the plan, it hangs after refreshing all allocated resource IDs; the new plan is never generated.
Please advise.
Regards,
Suraj
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.