Terraform-provider-azurerm: Managed SQL database instance

Created on 9 Aug 2018 · 44 Comments · Source: terraform-providers/terraform-provider-azurerm

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

It would be nice to have a Terraform resource for Azure SQL Managed Instance.

new-resource service/mssql upstream-microsoft

Most helpful comment

@theharleyquin not at this time - as @mb290 has mentioned the first Managed Instance in a Subnet can take around 6 hours to provision (up to a maximum of 30h) - which makes developing support for this fairly challenging, unfortunately.

I had a chat yesterday with an Azure representative and he promised me that the first deployment takes around 35 minutes now. They are selling Managed SQL Instances as the way to go for SQL databases; this Terraform resource is highly needed.

All 44 comments

I second this! :)

Is there anyone working on this or can I start writing a PR for the functionality?

@jeffreyCline have you started looking into this yet? If not I don't think anybody's working on this from HashiCorp/Microsoft's side, so you should be good @mb290 👍 (thanks in advance 😄)

@tombuildsstuff maybe I've bitten off much more than I can chew! The REST API documentation is available here and relatively straightforward; however, I'm really struggling to find a concise list of the options for SKUs: https://docs.microsoft.com/en-us/rest/api/sql/managedinstances/createorupdate

There are also lots of dependencies on the VNet that the subnet is in, so I'm trying to figure out the best way to either a) document or b) template this. https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance-vnet-configuration#requirements
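As a starting point, much of the networking prerequisite can already be expressed with existing azurerm resources. The sketch below is illustrative only - names and address ranges are placeholders, and the exact delegation, route-table, and NSG rules should be checked against the linked requirements:

```hcl
# Illustrative sketch only - verify against the linked MI VNet requirements.
resource "azurerm_virtual_network" "example" {
  name                = "example-vnet"
  location            = "eastus"
  resource_group_name = "example-rg"
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "mi" {
  name                 = "mi-subnet"
  resource_group_name  = "example-rg"
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.0.0/24"]

  # SQL Managed Instance requires the subnet to be delegated to the service.
  delegation {
    name = "managed-instance-delegation"
    service_delegation {
      name = "Microsoft.Sql/managedInstances"
    }
  }
}
```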

The final challenge is that, for the first SQL MI in a subnet, MS say it can take up to 6 hours to provision, so TF is going to require a very large timeout.

Bump - just checking in to see if there is any movement on this.

@theharleyquin not at this time - as @mb290 has mentioned the first Managed Instance in a Subnet can take around 6 hours to provision (up to a maximum of 30h) - which makes developing support for this fairly challenging, unfortunately.

It seems that we don't have news from Hashicorp on this one.

Hi,
We do have requirement where we have to build the customize terraform template for Azure SQL managed instance. Is there is any terraform template available for this?

> @theharleyquin not at this time - as @mb290 has mentioned the first Managed Instance in a Subnet can take around 6 hours to provision (up to a maximum of 30h) - which makes developing support for this fairly challenging, unfortunately.

I had a chat yesterday with an Azure representative and he promised me that the first deployment takes around 35 minutes now. They are selling Managed SQL Instances as the way to go for SQL databases; this Terraform resource is highly needed.

Currently handling MI db creation with an azurerm inline script task within an Azure DevOps release pipeline. All other IaC tasks use Terraform tasks. Using both task types means that I have to store release variables in both the DevOps variable group and .tfvars. It would be nice to do away with the DevOps variable group and only use the .tfvars, but I can't do that until Terraform supports MI db creation.

YAML for the DevOps task.

```yaml
steps:
- task: AzurePowerShell@3
  displayName: 'Azure PowerShell script to create new Azure SQL MI database on existing MI'
  inputs:
    azureSubscription: 'name'
    ScriptType: InlineScript
    Inline: |
      New-AzureRmResource -Location $(location) -ResourceId "/subscriptions/$(subscriptionId)/resourceGroups/$(sqlMIrg)/providers/Microsoft.Sql/managedInstances/$(sqlMIname)/databases/$(sqlMIdatabasename)" `
        -ApiVersion "2017-03-01-preview" `
        -AsJob -Force
```

Big fan of this capability!

Is there something in the pipeline now that might be available with Terraform 2.x? We need it urgently.

Any luck with this so far?

Well, I can at least say "Happy Birthday Issue"!

Please refer to https://registry.terraform.io/modules/doherty100/network-sqlmi/azurerm/1.0.1?tab=inputs

Cheerio!

@a138076 as mentioned in this comment (in the PR #2727) unfortunately this requires a change coming in 2.0 (custom timeouts) - as such support for this is currently blocked on v2.0 (which we're working on the dependencies for at the moment)
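For context, the custom timeouts feature referenced above lets a resource override Terraform's default operation deadlines. A minimal sketch of what that could look like for a hypothetical `azurerm_sql_managed_instance` resource - the resource name and arguments are illustrative, not a published schema:

```hcl
# Hypothetical resource - the name and arguments are illustrative only.
resource "azurerm_sql_managed_instance" "example" {
  name                = "example-mi"
  resource_group_name = "example-rg"

  # Custom timeouts (the 2.0 feature mentioned above) would allow the
  # multi-hour first deployment of a Managed Instance to complete
  # without Terraform timing out.
  timeouts {
    create = "24h"
    delete = "24h"
  }
}
```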

@tombuildsstuff; Do we have Terraform support for adding databases to the Managed instance?

Since Managed Instance creation itself takes a few hours, most of the time the Managed Instance is created once and automation is used to add/delete databases on the existing managed instance.
It would be nice if we could get resources to add/delete databases on an existing Managed Instance.

@RavulaChetan unfortunately that's dependent on support for provisioning SQL Managed Instances being available. As part of building support for each resource we add Acceptance Tests which cover provisioning and then destroying the resource and all of its dependencies - meaning in this case we'd provision a Resource Group, then a SQL Managed Instance and then the DB within that.


To cross-post an update from #2807 - from our side we've started working on 2.0 and are gradually implementing this functionality using Feature Toggles - in particular at the moment we're adding support for Custom Timeouts and the new Virtual Machines / Virtual Machine Scale Set resources outlined in #2807.

Whilst these features are still being developed these Feature Toggles are turned off and unsupported - but once they're further along we'll publish information on how you can use these features in 1.x release prior to them becoming available in 2.0.

This is relevant since we believe that it should be possible to feature toggle support for SQL Managed Instance too in the same "2.0 new features" feature toggle - and we're hoping to look into that once support for the VM/VMSS resources has been finished (which we're working on at the moment).

Thanks!

Hi , any update on this ?

Hi,

I've noticed that #5399 got the "Blocked" milestone assigned to it. Any information about what exactly is blocking it, or any other progress information? Eagerly waiting on this one. :)

Is this targeted for the v2 release, or should it be expected to be delayed into later March? I'm making plans for a project using a SQL Managed Instance and would like to know if I should look elsewhere.

Has there been any movement on this? Curious to see / play with this for building managed instance SQL

👋 to give an update here:

#5399 includes initial support for the SQL Managed Instance resource (and is based on #2727) - however during development we've encountered some issues which require discussion with the Service Team - as such this issue/PR is unfortunately blocked on an issue in the API, which we're in contact with the Service Team about.

Once we've heard back from the service team here / got a path forward we'll post another update - however unfortunately at the moment we're blocked and as such I'm going to assign this to the "Blocked" milestone for the moment.

Thanks!

Thanks for the update @tombuildsstuff. I know there's a lot of work going on in the background to improve provisioning and the APIs around this resource, so this work getting blocked makes sense.

Any update on this? Last we heard you're waiting on the services team, but that was over a month ago. I would love to be able to provision SQL Managed instances/databases

Unfortunately, we're still waiting on the service team to make the changes we need before we can move forward with this.

So if I understand correctly, the azurerm provider currently doesn't support azurerm_sql_managed_instance. If the work is completed and Terraform supports Managed Instance, I hope someone will update here?

@muralidar44 - correct, it is still not supported. We are waiting on the service team to fix an issue with deletion of MIs.

@katbyte Do you know if there was any progress with this upstream? Is there anything we could help check / test from the outside?

I second this @katbyte

> @theharleyquin not at this time - as @mb290 has mentioned the first Managed Instance in a Subnet can take around 6 hours to provision (up to a maximum of 30h) - which makes developing support for this fairly challenging, unfortunately.
>
> I had a chat yesterday with an Azure representative and he promised me that the first deployment takes around 35 minutes now. They are selling Managed SQL Instances as the way to go for SQL databases; this Terraform resource is highly needed.

Nope. :(

Duration: 3 hours 9 minutes 53 seconds

And a second instance took almost exactly the same amount of time to deploy (these times were in Aus East and Aus South East regions respectively):

Duration: 3 hours 9 minutes 49 seconds

I believe the Azure rep may have been talking about (or getting deployment timings confused with) instance pools:

https://docs.microsoft.com/en-gb/azure/azure-sql/managed-instance/instance-pools-configure#move-an-existing-single-instance-inside-an-instance-pool

In that article it mentions that:

"To deploy a managed instance into an instance pool, you must first deploy the instance pool, which is a one-time long-running operation where the duration is the same as deploying a single instance created in an empty subnet. After that, you can deploy a managed instance into the pool, which is a relatively fast operation that typically takes up to five minutes."

"Important: Deploying an instance pool is a long-running operation that takes approximately 4.5 hours."

Last time I deployed a SQL Managed Instance, on June 6, it took six hours in westeurope. Once you have the cluster in place, if you deploy another managed instance into the same cluster, it's true that the process is much faster (minutes). I see another problem with deploying a managed instance using Terraform: it deploys routes and NSG rules along with the cluster even though I instructed it not to, as per the documentation, so at the end of the process all those rules and routes will not be under Terraform management.

@bogdangrigg & @rocketingineer - still waiting on the service team

Hi all, I actually opened a ticket with Microsoft support and asked about this issue, and they told me there's no issue on the MS service team side:

[...]
There is no issue related to deployment of MI instance at this moment.
This is an expected behavior for the first deployment to take a few hours, and it will take the same amount of time whether you choose to deploy through the portal, ARM template, Terraform, etc.

The issue, as you said in the previous mail, is the fact that Terraform, as far as I know, will hit a timeout error because it takes a while to deploy the MI instance, but this has to be discussed with the HashiCorp team, because we cannot offer you support for a third-party solution.
[...]

So I'm a bit confused. Is the problem the deployment time of 5+ hours for the first MI in a subscription, or is there another MS-internal issue regarding MI deployment?

@TheKangaroo reading the reply from support, I believe that's implying "the API is currently working" (e.g. green on the status page), which is unrelated - unfortunately we're still blocked for the reasons outlined above.

There's an internal thread with the product/engineering teams tracking this (which support won't have access to) - but afaik at this point we're waiting on the service team to make some changes to the API to support this. Once that's done we should be able to take another look here 👍

Ah thank you for the clarification @tombuildsstuff :)

Upvoted since we are using SQL MI and really need this feature!

hi @katbyte / @tombuildsstuff ,

Just a quick question. Currently, as I understand it, Managed Instance is not yet supported in TF because of the items below:
1) create or update takes 3+ hours - can be fixed with custom timeouts
2) if implemented, acceptance tests fail because we cannot delete the virtual cluster programmatically

For item 2: if it were possible to get the Virtual Cluster name from the Managed Instance GET endpoint, we could use the Virtual Cluster endpoints to check whether it has any other managed instances, delete the Virtual Cluster if the current managed instance was the last one in it, and handle all of this in the Managed Instance delete method. Would that be enough to accept a Managed Instance PR until a more permanent solution for the Virtual Cluster is available?

Asking this because the SQL product team is ready to add an extra field for the Virtual Cluster name in the GET MI REST endpoint, and we would handle the VC delete logic ourselves in the azurerm provider (if the VC doesn't have any other managed instances or pools in it), if and only if the HashiCorp team accepts the temp solution I mentioned above.

Let me know if that is enough to get my PR for MI accepted.

Hey @PriyankaRanganath, thanks for taking the time to write out a detailed response. I will say that the first item, on long creation time, isn't so much of an issue, but the second item is definitely the hangup for us.

Unfortunately, while your proposed solution looks straightforward on paper, it causes quite a few issues in implementation. The first that comes to mind is: what happens if the virtual cluster fails to delete? The managed instance is gone, and now we don't have a way to get to that virtual cluster. We've left the user stranded because of a resource they didn't even know was created. Since the virtual cluster is the concern of the service, when the last database is gone, it should be safe to remove the subnet; so perhaps the subnet should be disassociated from the virtual cluster during deletion of the db.

Terraform users expect Terraform to manage their infrastructure end to end but the virtual cluster that's created outside Terraform breaks that model. I wish we could work around that in Terraform but there aren't any good solutions that we'd be comfortable supporting other than having the service team handle the virtual cluster end to end in a timely manner or to let Terraform handle the virtual cluster end to end.

Hi @mbfrahry ,

Thanks for the quick response.
I have a question regarding the failure-in-VC-deletion issue.
IMO, if we implement the proposed solution above, VC deletion should fail only if, during deletion of the last MI (before VC deletion starts), someone sends a create-MI request for that same VC. In all other cases, the VC drop should be successful (if it is empty). Otherwise we can implement retry logic until it is successful.
Correct me if I am wrong.

Would that temp solution work?

@PriyankaRanganath the API either needs to expose the VC so that it's manageable by third parties, or delete it whilst the last MI is being deleted - so I don't believe this approach changes anything, unfortunately?

Hi @tombuildsstuff ,

If the product team is ready to expose the VC in the API endpoint (as well as in the Go SDK) so that it can be managed by third parties,
would you consider the temp solution I mentioned above?

As I mentioned previously, the SQL product team is ready to expose VC details in the API if and only if the HashiCorp team accepts my proposed temp solution.

@PriyankaRanganath unfortunately whilst the proposed temporary solution may work for the tests in that example, in practice that's not sufficient. To go into a few of these scenarios:

  1. A user creates a SQL Managed Instance from scratch and tears it down (8h).
  2. A user creates/has an existing SQL Managed Instance which is the sole Managed Instance on the Virtual Cluster, and attempts a change which requires recreating (deleting and immediately recreating) it - for example if the collation needs to be changed. This takes an entire workday (8h) to replace, since deleting the MI deletes the VC (~4h) and then both must be recreated (another ~4h).
  3. Creating an MI within an existing VC - takes considerably less time, and the tainting issue isn't so much of a problem.
  4. Organizations want different teams/users to deploy multiple Managed Instances (which end up on the same Virtual Cluster) - from a code perspective these would be configured as different Terraform Modules (e.g. Deployments) referencing the same cluster. Today, whilst the API exposes the Virtual Cluster via the Delete, Get and Update methods, there's no way to create one - so how would users achieve this? Whilst we could determine that no other MIs are deployed to the VC and clean it up automatically, there's no guarantee that we have permission to do so in ARM - so what happens when this fails?
  5. There's a larger issue when we think about other resources that need an MI - if each test takes 8h to run and there are, say, 50 tests, even provisioning 20 in parallel means roughly three sequential 8-hour batches (~24 hours) for those tests to run; ultimately, whilst Virtual Cluster management is one issue, this would be the slowest test suite across the entire Azure Provider. Whilst this isn't an immediate issue, it's a blocker for us if it prevents us adding further Managed Instance functionality long-term.

Whilst the solution you've proposed above may fix the first use-case I've mentioned, unfortunately it doesn't work for the rest.

We've spent a little time investigating how this API works, and it appears the following Payload is available for Get-ing and Updating Virtual Cluster:

```json
{
    "properties": {
        "subnetId": "subnetID"
    },
    "location": "eastus",
    "name": "randomName"
}
```

Since the Delete, Get and Update APIs are available - meaning Virtual Clusters can already be partially managed by users - and the payload appears to be quite simple, we believe it should be possible to expose the Create API without exposing any further implementation details?

This would allow the long initial provisioning of the Virtual Cluster to happen one time in these scenarios, with the cluster referenced as necessary afterwards - enabling users to share it as needed. The Azure API could then provision one if a user doesn't specify the ID of an existing Virtual Cluster (as is the case in AKS and some other Azure APIs). This approach would both allow users to provision/reference existing clusters (which would allow this to work in Terraform) and keep the existing behaviour of provisioning a Virtual Cluster behind the scenes.

Alternatively Azure could automatically manage the provisioning/deletion of the Virtual Cluster /during/ the deletion of the last Managed Instance within that API - however that would again cause extended deletions when "tainting" (deleting and immediately recreating) the last SQL Managed Instance within a Virtual Cluster.

Based on the use-cases above, I don't see how it'd be possible to use this resource without Terraform being able to manage the lifecycle of the Virtual Cluster. It's worth noting from an implementation perspective that, since the Delete, Get and Update API calls for a Virtual Cluster are already exposed, this doesn't expose any more implementation details of the Virtual Cluster. It's also possible to do this in a backwards-compatible manner - where a user either specifies the ID of an existing Virtual Cluster to provision the Managed Instance within it, or (if they omit this field) the Azure API spins up the Virtual Cluster behind the scenes as is done today (and then automatically cleans it up as needed).
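In Terraform terms, the proposal could surface roughly as follows. This is a sketch only, assuming the Create API were exposed; both resource names and every argument are hypothetical, not an existing provider schema:

```hcl
# All resource names and arguments below are hypothetical sketches of
# the proposal, not an existing azurerm provider schema.
resource "azurerm_sql_virtual_cluster" "example" {
  name                = "example-vc"
  location            = "eastus"
  resource_group_name = "example-rg"
  subnet_id           = azurerm_subnet.example.id
}

resource "azurerm_sql_managed_instance" "example" {
  name                = "example-mi"
  resource_group_name = "example-rg"

  # Reference an existing Virtual Cluster; if this field were omitted,
  # the API could provision one behind the scenes as it does today.
  virtual_cluster_id = azurerm_sql_virtual_cluster.example.id
}
```

With this shape, the multi-hour Virtual Cluster provisioning happens once, and subsequent Managed Instances (including taint/recreate cycles) would only pay the much shorter in-cluster deployment time.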

As such is it possible to expose the Create API call here too?


cc @JeffreyRichter - since this seems extremely curious from an ARM perspective: a resource can be retrieved, updated and deleted - but not provisioned - which feels like an anti-pattern?
