Azure-cli: Support creating a blank VHD in a storage account

Created on 8 Aug 2016 · 21 comments · Source: Azure/azure-cli

There are Kubernetes scenarios where users need to be able to provision blank VHDs in a storage account so they can make them available in their cluster.

Today, basically there are only two options as far as I can tell:

  1. Attach a new blank disk to a VM from the Portal or CLI. Immediately detach the disk. At least it's a valid VHD.
  2. Create the VHD on your local system. (qemu-img, etc)

There ought to be a way to do this from CLI (or better yet, if this were something that Storage could do via an API).
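
For reference, a minimal sketch of option 2 above, assuming qemu-img and the az CLI are available locally; the storage account, container, and file names below are placeholders:

# Create a fixed-size blank VHD locally (force_size needs a reasonably recent qemu-img),
# then upload it to an existing storage account as a page blob.
qemu-img create -f vpc -o subformat=fixed,force_size=on blank-data-disk.vhd 10G

az storage blob upload \
    --account-name mystorageaccount \
    --container-name vhds \
    --name blank-data-disk.vhd \
    --file blank-data-disk.vhd \
    --type page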

Compute Storage

All 21 comments

Though it is possible to use a combination of PutPageBlob and PutPage to create a blank page blob and then append a VHD footer to the blob, Azure API support is ultimately the best approach, for the following reasons:

  • it is an atomic operation
  • even if the VHD format changes, the API won't change

Here's a sample piece of code: https://gist.github.com/sedouard/0468d1b2d0153ca9a652

But ideally this problem should be resolved with upcoming new Disks APIs.

One such user scenario is Pachyderm, which requires a data disk to be created and mounted for use by its RethinkDB (https://github.com/pachyderm/pachyderm/issues/960). Having an API to create a blank data disk would be da 💣 or... at least put Azure on par with AWS and GCE:

AWS: aws ec2 create-volume ...
GCE: gcloud compute disks create ...

I ended up wrapping @sedouard's workaround into a "helper" Docker image (https://github.com/jpoon/azure-create-vhd). Still would be good to have this in the Azure CLI.

I had already done that as well, for the azure-kubernetes-demo repo that shows how to use data disks: https://github.com/colemickens/azure-tools

Mine can preformat the disk as well, and create the storage account/containers if they're missing.

Is there any update on this? @colemickens' Docker image works well, but enabling this operation via the Azure CLI is really the way to go.

No update here. When everything moves to managed disks, there will be an easy way to do this, including CLI support. For now, the non-managed disks implementation simply has no API for disk creation.

For now, it's much easier to just use dynamic provisioning... you just create a PVC for whatever size you want and Kubernetes will make the PV for you.
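
As a minimal sketch of that flow (the StorageClass parameters and all names here are placeholder assumptions, using the kubernetes.io/azure-disk provisioner):

# Define an azure-disk StorageClass, then request a volume through a PVC;
# the provisioner creates the underlying disk/PV automatically.
kubectl apply -f - <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-standard
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data-claim
spec:
  storageClassName: azure-standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF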

@colemickens I managed to use dynamic provisioning via the _kubernetes.io/azure-disk_ Provisioner but I'm hitting the very low limit (i.e. 4) of mounted disks on my A2 VM.

Is there a way to do the same dynamic provisioning with azure-file? I.e., dynamically create file shares in my Storage Account and mount them on my Pods.

ta.

Dynamic provisioning for Azure Files should be in 1.7, and probably cherry-picked to 1.6.1, I think: https://github.com/kubernetes/kubernetes/pull/42170

But you probably shouldn't use it for anything serious at all. The perf is definitely not good enough for a DB or anything.
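
For reference, once that lands, the azure-file variant should look roughly like this sketch (the class name is a placeholder; skuName selects the storage account SKU used for the dynamically created share):

# StorageClass for dynamically provisioned Azure Files shares.
kubectl apply -f - <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
EOF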

@colemickens thx for the prompt response. And yeah, I reckon azure-files are SMB shares and not suitable for DBs. Wondering if it could work for a small SQLite though..

In any case, do you happen to know why is there a limitation on the number of vhds I can mount on a VM and why is it so low?

{
    "maxDataDiskCount": 4,
    "memoryInMb": 3584,
    "name": "Standard_A2",
    "numberOfCores": 2,
    "osDiskSizeInMb": 1047552,
    "resourceDiskSizeInMb": 138240
}
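
For anyone else checking this limit, output like the above looks like what az vm list-sizes reports; for example (the location is a placeholder):

# List VM sizes in a region and filter for Standard_A2 to see maxDataDiskCount.
az vm list-sizes --location westeurope --query "[?name=='Standard_A2']" --output json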

That's just how Azure sells their SKUs. I'd suggest contacting support to ask them and express your interest in different SKUs with lower compute and more disks.

Thx @colemickens.

@dbalaouras here's a table that shows the various SKUs and the number of data disks that you can attach. But please do reach out to Support expressing your need for MOAR disks.

@jpoon thx. My concern is that, in the best case, the max number of disks is double the number of cores according to that table. That's not super useful when using dynamic provisioning with (at least) 1 VHD per Pod, because you are then limited to 3 Pods per node (when using A2 VMs), or 2 x cores - 1 in general :-/

@dbalaouras

And yeah, I reckon azure-files are SMB shares and not suitable for DBs. Wondering if it could work for a small SQLite though..

I tried to use Azure Files for SQLite database storage for Grafana, which failed with errors about a "locked" database. Adding "nobrl" to the CIFS share mount options works around this issue, but I doubt it makes SQLite work correctly on an SMB share; I think data corruption can be expected.
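
For reference, that workaround amounts to a mount line roughly like this (account, share, key, and mount point are placeholders):

# Mount the Azure Files share over SMB with byte-range locking disabled (nobrl).
sudo mount -t cifs //mystorageaccount.file.core.windows.net/myshare /mnt/grafana \
    -o vers=3.0,username=mystorageaccount,password="$STORAGE_KEY",dir_mode=0777,file_mode=0777,nobrl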

When everything moves to managed disks, there will be an easy way to do this, including CLI support.

Yep, try az disk create
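
For example (resource group, disk name, size, and SKU below are placeholders):

# Create an empty 128 GiB managed data disk.
az disk create \
    --resource-group myResourceGroup \
    --name myEmptyDataDisk \
    --size-gb 128 \
    --sku Premium_LRS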

Trying to use

az disk create

with Kubernetes like this example from @colemickens (https://github.com/colemickens/azure-kubernetes-demo/blob/master/test-azure-disk.yaml), where I need to specify a VHD URL. Managed disks don't seem to have the notion of VHD URLs? How can I use managed disks in the YAML for Kubernetes?

I need to specify a VHD URL. Managed disks don't seem to have the notion of VHD URLs

I don't really know the e2e flow, but for this specific question, try az disk grant-access
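
Roughly like this (names are placeholders); it returns a temporary SAS URI for the managed disk, which can stand in where a blob URL is expected:

# Generate a SAS URI for the managed disk, valid for one hour.
az disk grant-access \
    --resource-group myResourceGroup \
    --name myEmptyDataDisk \
    --duration-in-seconds 3600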

@writeameer managed disk support in k8s is coming and, I think, is slated for 1.7 (https://github.com/kubernetes/kubernetes/pull/41950)
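
Once that support lands, the in-tree azureDisk volume should be able to reference a managed disk by resource ID instead of a VHD URL; a sketch, with the disk name, resource IDs, and image all placeholders:

# Pod mounting a managed data disk by resource ID via the azureDisk volume source.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: disk-test
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      azureDisk:
        kind: Managed
        diskName: myEmptyDataDisk
        diskURI: /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/disks/myEmptyDataDisk
EOF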

@yugangw-msft Thank you.

@jpoon Got it, thank you for the link to the issue - great insight - much appreciated.

Hi folks, can this issue be closed now since managed disk support has been integrated with azure/k8s? Let me know if I missed anything, or if there are other improvements I should make.
