0.7.8 (master)
Resources across multiple providers seem to be affected, including aws, azurerm and google.
I ran TestAccAzureRMVirtualNetworkPeering_importBasic with a 10m timeout; in my experience this test has never exceeded that time.
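For reference, a typical invocation of a single acceptance test with an explicit timeout looks like this (the provider package path is an assumption based on Terraform's 0.7-era repo layout, and `TF_ACC` plus Azure credentials are assumed to be set):

```shell
# Run one acceptance test with a 10-minute hard timeout; the test
# binary panics and dumps all goroutine stacks if it is exceeded.
TF_ACC=1 go test ./builtin/providers/azurerm -v \
  -run TestAccAzureRMVirtualNetworkPeering_importBasic -timeout 10m
```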
2016/10/21 11:50:12 [TRACE] [walkValidate] Exiting eval tree: azurerm_virtual_network_peering.test1
2016/10/21 11:50:12 [DEBUG] root: eval: *terraform.EvalValidateResource
2016/10/21 11:50:17 [DEBUG] vertex provider.azurerm (close), waiting for: azurerm_virtual_network_peering.test2
2016/10/21 11:50:22 [DEBUG] vertex provider.azurerm (close), waiting for: azurerm_virtual_network_peering.test2
2016/10/21 11:50:27 [DEBUG] vertex provider.azurerm (close), waiting for: azurerm_virtual_network_peering.test2
2016/10/21 11:50:32 [DEBUG] vertex provider.azurerm (close), waiting for: azurerm_virtual_network_peering.test2
Note: the last message is repeated for the remaining 10 minutes
https://gist.github.com/pmcatominey/4361793a996b7a83e4330c14a09121ce
The test should have run within the timeout.
A panic was raised due to the test exceeding the timeout.
We noticed our builds were timing out after #9334 by @mitchellh was merged but haven't found a cause which points to any change yet.
Not sure if related, but I've been trying to converge an instance all day long and it always starts hanging at some point (a different one every time)... So basically I'm getting:
module.FOO.BAR.aws_instance.server.7: Still creating... (4m20s elapsed)
messages forever. I'm using TF v0.7.5.
I'm fairly certain this is related to the large DNS DDoS going on this morning. The US east coast was hit particularly hard.
The Azure client doesn't use a sane http transport, is missing timeouts, and doesn't even have a Dialer that times out.
@pmcatominey,
OK, I have a config which cannot make progress to complete a plan using master, which may be what you're seeing (and if it isn't, I'm appropriating this bug 😉 ). The config was sent to me privately, so I'll post it if I can build a contained repro case.
I'm running into this as well with an OpenStack config. I'm running this directly on the OpenStack server, which cancels out any network issues.
What's strange for me is that running the config through the acceptance test framework works fine. Running the config through a compiled Terraform binary is triggering the issue.
I've confirmed that everything works prior to #9334 being merged by building a binary at the commit before and after #9334.
Here's a gist with the relevant info: https://gist.github.com/jtopjian/3c00251105048336bc298180dc4df6a4
Please let me know if I can provide any other information including access to the OpenStack server for further testing.
Found the culprit; the fix is merging with #9525. I also added a -shadow=false flag to every Terraform command so the shadow graph can be disabled to determine whether it is the cause.
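For anyone wanting to rule the shadow graph in or out on their own configs, the flag mentioned above can be passed to any Terraform command, e.g.:

```shell
# Disable the shadow graph; if the hang disappears, the shadow
# graph is the trigger.
terraform plan -shadow=false
```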
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.