Terraform v0.7.6 & v0.7.7
https://www.dropbox.com/s/p7zre062k4bnaiu/main.tf?dl=0
https://gist.github.com/CloudSurgeon/075ef1a8163c126f9c0bd04e3cf631a7
AWS instance creation should not time out when in 'pending' status, or a tunable should exist to adjust this timeout value.
Terraform quits with "Error launching source instance: timeout while waiting for state to become 'success' (timeout: 15s)" when the instance takes longer than 15s to go from 'pending' to 'running'.
Steps to reproduce: provision 30-40 aws_instances at once. Chances are that one or two will get the aforementioned error.
Same problem here. Provisioning 20 aws_instances works, 30 fails.
Also applies to Terraform v0.7.4.
The worst thing is that Terraform loses track of that instance, so trying to destroy things like VPCs and subnets doesn't work, as the VM is still there untouched.
We are provisioning 30+ instances at the same time and also run into this issue (v0.7.9). Never had this problem before we upgraded from v0.7.3.
Info about possible workarounds would be greatly appreciated, as it is we unfortunately have to revert to v0.7.3.
I think this comes from commit https://github.com/hashicorp/terraform/commit/2943a1c978137a75edbcbda2c9c1121c5219bbe2
(later on updated to 15s in ebf6e51b32ed016d6505655f301609684a50b135)
in our case the root error is AWS returning a 500 InsufficientInstanceCapacity response
We currently do not have sufficient m4.xlarge capacity in the Availability Zone you requested (us-east-1c). Our system will be working on provisioning additional capacity. You can currently get m4.xlarge capacity by not specifying an Availability Zone in your request or choosing us-east-1d, us-east-1e, us-east-1a.
This is only shown in Terraform's debug mode. Increasing the timeout in https://github.com/hashicorp/terraform/blob/ebf6e51b32ed016d6505655f301609684a50b135/builtin/providers/aws/resource_aws_instance.go#L369 makes Terraform eventually succeed (most of the time), but the underlying error is still never shown, only the generic timeout error.
I think the error is swallowed somewhere around https://github.com/hashicorp/terraform/blob/master/helper/resource/state.go#L85 and never gets displayed.
Any updates on this issue? I don't see a resolution.
I just hit this. It is a HUGE problem for us.
I use terraform v0.8.4.
Anyone knows how to overcome this issue?
I am seeing this issue on terraform 0.8.6
I think I found that this only happens if you run multiple Terraform processes simultaneously.
This happens for me with just running one terraform process.
Getting the same issue running with terraform 0.8.4 or 0.8.7.
It seems to only occur when launching a lot of instances. For example, I was able to launch ~10 with no problem, but when attempting to launch 60 we get the following for a subset of the instances. Running terraform apply over and over eventually succeeds, so we have plenty of instances available to us:
```
Error applying plan:

12 error(s) occurred:
```
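Since re-running `terraform apply` eventually succeeds, one pragmatic stopgap is a retry wrapper around the apply step. This is only a sketch of that idea, not anything Terraform itself provides; the attempt count, the delay, and the `flaky` demonstration command are arbitrary choices for illustration:

```shell
#!/bin/sh
# Re-run a command until it succeeds, up to MAX_ATTEMPTS times.
MAX_ATTEMPTS=5
SLEEP_SECONDS=1   # use something like 30 for a real terraform run

retry() {
    attempt=1
    while [ "$attempt" -le "$MAX_ATTEMPTS" ]; do
        if "$@"; then
            return 0
        fi
        echo "Attempt $attempt failed; retrying in $SLEEP_SECONDS seconds..." >&2
        attempt=$((attempt + 1))
        sleep "$SLEEP_SECONDS"
    done
    echo "Command still failing after $MAX_ATTEMPTS attempts." >&2
    return 1
}

# Intended usage:
#   retry terraform apply -auto-approve
#
# Demonstrated here with a stand-in command that fails twice, then succeeds:
count_file=$(mktemp)
echo 0 > "$count_file"
flaky() {
    n=$(($(cat "$count_file") + 1))
    echo "$n" > "$count_file"
    [ "$n" -ge 3 ]
}
retry flaky && echo "succeeded after $(cat "$count_file") attempts"
```

Note that blindly retrying does not address the state-tracking problem mentioned above: if Terraform lost track of an instance on a failed attempt, a retry can launch a duplicate.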
Same issue here, a real pain when trying to automate infrastructure. Using 0.8.8. What I try now is running Terraform against each module individually, and sometimes it works and sometimes it doesn't.
We're having the same issue but with ELBs:
```
Error applying plan:

1 error(s) occurred:
```
Sometimes it works and sometimes it doesn't. Terraform version is 0.8.5
We had this issue here when AWS was out of capacity for the instance type we were trying to spin up.
Terraform (0.9.2) logged this:
"Error launching source instance: timeout while waiting for state to become 'success' (timeout: 15s)"
And attempting to do it in the AWS console resulted in an AWS error saying that they are out of capacity.
Expected Behavior: Terraform would inform us that AWS was out of capacity.
I am on v0.9.3 and ran into this error. There seemed to be a temporary shortage of c4.large capacity, which was resolved by waiting a few minutes and running again.
Same error here when trying to launch an instance. Any updates on this?
Any update on this?
I am using
Terraform v0.10.3
and getting the same error for just one instance
```
Error applying plan:

1 error(s) occurred:
```
+1, getting the same error as saparasar.
It is possible to identify the underlying issue using the Terraform debug logs. In my case, the issue was that AWS did not have enough capacity for the instance type I chose in the availability zone I configured.
A good approach to debug this error would be the following:
1. Set the `TF_LOG` environment variable to enable Terraform Debug Logs. This can be done with the following command on Linux or Mac: `export TF_LOG=DEBUG`
2. Run the `terraform apply` command. Type `yes` and hit Enter if the plan is okay for you.
3. Once the error appears, look at the `[DEBUG]` entries a few lines above it to identify the underlying issue. To give an example, in my case I found the following after a `[DEBUG] plugin.terraform-provider-aws_v1.58.0_x4:` entry:
```
<Response>
  <Errors>
    <Error>
      <Code>InsufficientInstanceCapacity</Code>
      <Message>We currently do not have sufficient t3.medium capacity in the Availability Zone you requested (us-west-1a). Our system will be working on provisioning additional capacity. You can currently get t3.medium capacity by not specifying an Availability Zone in your request or choosing us-west-1b.</Message>
    </Error>
  </Errors>
  <RequestID>...</RequestID>
</Response>
```
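The steps above can be sketched as a short command sequence. Writing the log to a file via `TF_LOG_PATH` makes it easier to search afterwards; the grep pattern is just an illustration, since the exact debug output format varies by provider version. The extraction step is demonstrated here on a saved excerpt of the debug output shown above:

```shell
# Enable debug logging, capture it to a file, then apply:
#   export TF_LOG=DEBUG
#   export TF_LOG_PATH=terraform-debug.log
#   terraform apply
#
# Afterwards, pull the provider's error code and message out of the log.
# Simulated here with an excerpt of the debug output from above:
cat > terraform-debug.log <<'EOF'
[DEBUG] plugin.terraform-provider-aws_v1.58.0_x4:
<Response>
  <Errors>
    <Error>
      <Code>InsufficientInstanceCapacity</Code>
      <Message>We currently do not have sufficient t3.medium capacity in the Availability Zone you requested (us-west-1a).</Message>
    </Error>
  </Errors>
</Response>
EOF

# Show only the error code and message lines:
grep -E '<(Code|Message)>' terraform-debug.log
```

On a real run, searching the log this way surfaces the `InsufficientInstanceCapacity` error that the generic 15s timeout message hides.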
I get this issue when I change the gateway of a macvlan Docker network.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.