openstack_blockstorage_volume_v1.registry_data: Creating...
attachment.#: "" => "<computed>"
availability_zone: "" => "<computed>"
description: "" => "docker registry data store"
metadata.#: "" => "<computed>"
name: "" => "tf-registry-data"
region: "" => "RegionOne"
size: "" => "40"
volume_type: "" => "<computed>"
openstack_blockstorage_volume_v1.registry_data: Error: 1 error(s) occurred:
* Error creating OpenStack volume: Expected HTTP response code [200 201] when accessing [POST http://controller:8776/v2/f3ea7760e6864240ba963e4be7c50826/volumes], but got 202 instead
{"volume": {"status": "creating", "user_id": "2e96f8aab7aa41f0a0797c59afe3fe92", "attachments": [], "links": [{"href": "http://controller:8776/v2/f3ea7760e6864240ba963e4be7c50826/volumes/53dfb65a-a2f0-4cd1-bb19-fd6484ee85ee", "rel": "self"}, {"href": "http://controller:8776/f3ea7760e6864240ba963e4be7c50826/volumes/53dfb65a-a2f0-4cd1-bb19-fd6484ee85ee", "rel": "bookmark"}], "availability_zone": "nova", "bootable": "false", "encrypted": false, "created_at": "2015-07-22T13:54:14.683693", "description": "docker registry data store", "volume_type": null, "name": "tf-registry-data", "replication_status": "disabled", "consistencygroup_id": null, "source_volid": null, "snapshot_id": null, "multiattach": false, "metadata": {}, "id": "53dfb65a-a2f0-4cd1-bb19-fd6484ee85ee", "size": 40}}
Error applying plan:
1 error(s) occurred:
* Error creating OpenStack volume: Expected HTTP response code [200 201] when accessing [POST http://controller:8776/v2/f3ea7760e6864240ba963e4be7c50826/volumes], but got 202 instead
{"volume": {"status": "creating", "user_id": "2e96f8aab7aa41f0a0797c59afe3fe92", "attachments": [], "links": [{"href": "http://controller:8776/v2/f3ea7760e6864240ba963e4be7c50826/volumes/53dfb65a-a2f0-4cd1-bb19-fd6484ee85ee", "rel": "self"}, {"href": "http://controller:8776/f3ea7760e6864240ba963e4be7c50826/volumes/53dfb65a-a2f0-4cd1-bb19-fd6484ee85ee", "rel": "bookmark"}], "availability_zone": "nova", "bootable": "false", "encrypted": false, "created_at": "2015-07-22T13:54:14.683693", "description": "docker registry data store", "volume_type": null, "name": "tf-registry-data", "replication_status": "disabled", "consistencygroup_id": null, "source_volid": null, "snapshot_id": null, "multiattach": false, "metadata": {}, "id": "53dfb65a-a2f0-4cd1-bb19-fd6484ee85ee", "size": 40}}
Resource was:
resource "openstack_blockstorage_volume_v1" "registry_data" {
region = "RegionOne"
name = "tf-registry-data"
description = "docker registry data store"
size = 40
}
This also happens on 0.5.3.
Did the volume get created? I ask because of the 202 Status in the response.
Of course, sorry for not mentioning before.
It looks like 202 is actually a correct response, although I've not found a reference for it in the OpenStack documentation (what a surprise...).
Thanks!
@gionn - Looks like this is where it's failing: https://github.com/rackspace/gophercloud/blob/master/openstack/blockstorage/v1/volumes/requests.go#L86
What version of OpenStack Cinder are you testing against? My guess is v2, since strangely the response code seems to have changed from 201 to 202 between v1 and v2. Thanks OpenStack....
http://developer.openstack.org/api-ref-blockstorage-v2.html#createVolume
http://developer.openstack.org/api-ref-blockstorage-v1.html#createVolume
Cinder API v2 was used: http://controller:8776/v2/f3ea7760e6864240ba963e4be7c50826.
Looks like Gophercloud only supports Cinder API v1 at the moment. Prior to adding a new openstack_blockstorage_volume_v2 resource we have to add support for Cinder API v2 to Gophercloud. I opened a feature request: https://github.com/rackspace/gophercloud/issues/449 .
You can simply modify gophercloud/blob/master/openstack/blockstorage/v1/volumes/requests.go#L86 to add 202 to OkCodes; it then works with the Cinder v2 API.
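For readers not familiar with the library, the failure boils down to a status-code whitelist check in the request helper. The following is a simplified, hypothetical illustration of that check (not the actual Gophercloud code), showing why a 202 from Cinder v2 fails a `[200 201]` whitelist and why the proposed workaround makes it pass:

```go
package main

import "fmt"

// okCodesContains mimics the status-code whitelist check that the
// request helper performs: the response status must be one of the
// expected OkCodes, otherwise the call is treated as an error.
// (Simplified illustration, not the actual Gophercloud code.)
func okCodesContains(okCodes []int, status int) bool {
	for _, c := range okCodes {
		if c == status {
			return true
		}
	}
	return false
}

func main() {
	okCodes := []int{200, 201} // what the v1 volume-create call expects
	// Cinder v2 answers 202 Accepted, so the check fails:
	fmt.Println(okCodesContains(okCodes, 202)) // false
	// The proposed workaround adds 202 to the whitelist:
	okCodes = append(okCodes, 202)
	fmt.Println(okCodesContains(okCodes, 202)) // true
}
```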
202 is not an expected response code for Cinder API v1 (http://developer.openstack.org/api-ref-blockstorage-v1.html#createVolume). The expected response code for Cinder API v1 is 201.
202 is an expected response code for Cinder API v2 (http://developer.openstack.org/api-ref-blockstorage-v2.html#createVolume).
Adding 202 as expected response code for Cinder API v1 is not correct.
I agree with @berendt here. The proper fix is to get Cinder V2 support into Gophercloud and then Terraform.
As a workaround, it's possible to run both Cinder v1 and v2 in OpenStack by having two catalog entries in Keystone. This must be done at the OpenStack operator's level, though.
With Kilo+ defaulting to Cinder V2, getting V2 support into Gophercloud will be a high priority.
Hello everyone. My OpenStack provider offers both the Cinder v1.0 and v2.0 APIs, but since v2.0 is the default, Terraform tries to use it. How can I set up Terraform to use v1.0 instead?
If your provider is providing both v1 and v2, then v1 should "just work". There might be an issue where your provider is advertising the v2 api as if it was the v1 endpoint.
You can see this by running `keystone catalog`, which will list the services and endpoints that your provider is advertising.
There's a discussion on openstack-operators about Cinder v1 and v2 endpoints. The stability seems to be in flux.
http://thread.gmane.org/gmane.comp.cloud.openstack.operators/4146
I'd say getting Cinder v2 support in Gophercloud (and then Terraform) is definitely a priority; however, IMO, the v1 API should still be considered fully functional and usable at this time.
The problem is that Cinder v1 does not support multi-region deployments, hence the failure occurs when Terraform tries to attach volumes to instances. I think you should either go for the quick fix proposed by @guanglinlv, or consider not using Gophercloud, since they are not going to support Cinder v2 soon. Many people in the community have Kilo installations with multi-region deployments, and if you don't fix this soon, you are going to lose several users.
@mcapuccini Can you give more details about the issue related to multiple regions? I'm not familiar with it. Or did you mean "endpoints"?
OK, I _think_ the region issue has been resolved (per discussion here).
While troubleshooting that issue, I now think I understand what it means that Cinder v2 will be the default from Kilo+: it only concerns the various OpenStack components (or maybe just Nova) communicating with Cinder internally. Per various discussions on the OpenStack mailing lists, the v1 API might be in some form of deprecation, but I know of no current intention to outright _remove_ it, so OpenStack clouds can certainly continue to provide a Cinder v1 API endpoint.
And as an aside, this thread has a summary of what libraries / SDKs currently support Cinder v2.
I've just experienced the same issue using Terraform v0.6.11 on OS X with OpenStack Kilo. Code like this:
resource "openstack_blockstorage_volume_v1" "elastic-05-vdb" {
availability_zone = "AZ1"
name = "elastic-05-vdb"
size = "1024"
}
results in the following error:
openstack_blockstorage_volume_v1.elastic-05-vdb: Creating...
attachment.#: "" => "<computed>"
availability_zone: "" => "AZ1"
metadata.#: "" => "<computed>"
name: "" => "elastic-05-vdb"
region: "" => "REG"
size: "" => "1024"
volume_type: "" => "<computed>"
Error applying plan:
1 error(s) occurred:
* openstack_blockstorage_volume_v1.elastic-05-vdb: Error creating OpenStack volume: Expected HTTP response code [200 201] when accessing [POST http://foobar:8776/v2/b75763c41a684c3999c47d269489b237/volumes], but got 202 instead
{"volume": {"status": "creating", "user_id": "Rutkowski, Bartek", "attachments": [], "links": [{"href": "http://foobar:8776/v2/b75763c41a684c3999c47d269489b237/volumes/75704f45-fefc-4fdc-8723-c31c7f57f803", "rel": "self"}, {"href": "http://foobar:8776/b75763c41a684c3999c47d269489b237/volumes/75704f45-fefc-4fdc-8723-c31c7f57f803", "rel": "bookmark"}], "availability_zone": "AZ1", "bootable": "false", "encrypted": false, "created_at": "2016-02-08T17:04:28.645914", "description": null, "volume_type": null, "name": "elastic-05-vdb", "replication_status": "disabled", "consistencygroup_id": null, "source_volid": null, "snapshot_id": null, "multiattach": false, "metadata": {}, "id": "75704f45-fefc-4fdc-8723-c31c7f57f803", "size": 1024}}
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Any chance to get this fixed in some reasonable manner (one that doesn't involve cloning and hacking the Terraform sources, nor reconfiguring OpenStack to provide multiple Cinder endpoints, since that's outside my control)?
This is not a Terraform issue, Cinder API v2 is not implemented in Gophercloud.
While technically correct, I wouldn't say it's not a Terraform issue: because of it, Terraform doesn't support the latest stable OpenStack releases, which renders it quite useless for OpenStack users.
@bartekrutkowski Confirmed. At least for OpenStack users working on current OpenStack environments. It should be mentioned that Cinder API v1 is now deprecated. I do not know how to move this issue forward without first implementing this feature in Gophercloud. An issue was opened a long time ago: rackspace/gophercloud/issues/449. Somebody has to work on that issue first before we can use Cinder API v2 in Terraform.
I agree with @bartekrutkowski that this is a significant use case issue for terraform/openstack. At $day_job, we had to use the python bindings instead of terraform for an internal service (fall of 2015) because of this limitation.
I'm not familiar with the library, but go-goose appears to have Cinder v2 support: https://github.com/go-goose/goose
I do not have a strong preference for Gophercloud over any other Go SDK. I do not know the Terraform development process, but changing the SDK in use is a major change: I think some Terraform PTL has to approve it first, and of course somebody has to perform the change. @jtopjian do you have further details here?
Nothing new. Just what I've mentioned throughout this issue already.
I do sympathize with the users who'd like to use Cinder v2 with Terraform and feel it's unfortunate that the support is not available yet.
My opinion is to push to get support into Gophercloud. I feel switching to another OpenStack Go library would be more of an emotional reaction than a logical one. While it may be true that Goose supports Cinder v2, Gophercloud is much more robust and has a lot of other features, both user and developer facing. And I do not think using two OpenStack libraries is a smart decision.
Gophercloud might be slow to accept patches, but it's not a dead project.
For anyone that feels strongly about Cinder v2 support, please voice your concern (constructively, of course :smile:) at Gophercloud.
While I totally agree it is _not_ an appropriate long-term solution, I am curious about why Cinder v1 cannot be used for the time being? Is it because Cinder v1 has been turned off in your cloud or is there a v2 feature that you are trying to use? I'm asking both out of curiosity on the use-cases of v2 as well as rhetorically suggesting a _temporary_ alternative.
@jtopjian The admittedly small number of OpenStack deployments I have experience with do not support Cinder v1 at all, but the endpoint list incorrectly reports the default Cinder endpoint as v1 while separately listing a v2 endpoint.
It's my understanding that listing a v1 (as cinder) and v2 (as cinderv2) endpoint are the correct ways to advertise Cinder. This is the method detailed in the install guide:
http://docs.openstack.org/liberty/install-guide-ubuntu/cinder-controller-install.html
I do not mean this to negate what you've seen, just to show an entirely different point of view: I work and have access to several OpenStack clouds and all of them have Cinder v1 enabled. I do find it very interesting that you've seen some which disable v1. Again, for my own curiosity, is there a reason why v1 was disabled?
I'll see if I can get an answer as to why v1 is not enabled but I'll point out that @gionn clearly has hit the same issue with the cinder endpoint incorrectly being the v2 api.
Yup, totally valid issue that's semi-separate from the absence of v2 support.
I think a long-term solution will be to create provider options where individual API endpoints can be specified for deployments that have custom/non-standard endpoints.
Is there any update on this? Supporting Cinder v2 seems an essential requirement nowadays.
It looks like Cinder V2 is already supported in Gophercloud:
https://github.com/rackspace/gophercloud/commits/master/openstack/blockstorage/v2
I was wondering how long it would take someone to mention that Gophercloud merged in Cinder v2 support... 72 hours! 😉
It's on my list to build Block Storage v2 resources, unless someone else is able to get to them first.
Hi all,
Please see #6693. If anyone is able to, please test it out and let me know of any issues!
@jtopjian thanks! I've tested creating/destroying, works like a charm.
@Xarthisius That's great news! I'll wait and see if anyone else can verify, but I don't see an issue with getting this merged in for the next release.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.