It's now possible to create a storage account and set the is_hns_enabled flag to enable Data Lake Gen2 support (#2897).
As a follow-on, would it be possible to add a new azurerm_storage resource type to actually create a Data Lake filesystem within the storage account?
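To illustrate the request, such a resource might look something like this — the resource type and attribute names below are hypothetical, not an existing part of the provider:

```hcl
# Hypothetical shape of the requested resource -- names are illustrative only.
resource "azurerm_storage_data_lake_gen2_filesystem" "example" {
  name               = "example"
  storage_account_id = azurerm_storage_account.example.id
}
```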
Background is that I'm looking to leverage Terraform's recent support for HDInsight and want to use Data Lake Gen2 as the primary storage for Kafka / Spark.
Thanks!

@andrew-kelleher I'm also looking for this feature and will probably implement it myself. Out of interest, how do you currently handle creating the filesystem? Just manually in the portal?
Hi @r0bnet at the moment I'm deploying the storage account natively using the azurerm_storage_account resource type and setting the is_hns_enabled flag to true.
I can then deploy an HDInsight cluster that references the storage via an ARM template embedded within the Terraform file. The advantage of this approach is that I just pass in the filesystem name I want and it will automatically create the filesystem. It's not especially elegant, as you have the usual limitations around embedded ARM templates, but it works for us.
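For reference, the storage-account side of that setup looks roughly like this — the resource and account names are placeholders, and note that is_hns_enabled requires a StorageV2 account:

```hcl
# Example storage account with hierarchical namespace (Data Lake Gen2) enabled.
# Names are placeholders; is_hns_enabled requires account_kind = "StorageV2".
resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_kind             = "StorageV2"
  account_tier             = "Standard"
  account_replication_type = "LRS"
  is_hns_enabled           = true
}
```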
Oh yeah, but it sounds like a better solution than creating it manually. I'm currently unsure whether I can implement this feature. There is a filesystem client in the storagedatalake package, but instantiating it is not as straightforward as I was hoping.
I hope @tombuildsstuff or @katbyte can assist here.
```go
func NewFilesystemClient(xMsVersion string, accountName string) FilesystemClient
```
That's the function signature, but I'm unsure what xMsVersion is (presumably the value for the x-ms-version storage REST API header), nor can I tell whether there is a way to pass in the account name in the first place. If you can provide any useful information, please let me know.
@r0bnet just to ensure we've put yesterday's IRL discussion here: it should be possible to configure this on the Storage Account within the HDInsight Cluster, rather than within the separate Client: https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/vendor/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2018-06-01-preview/hdinsight/models.go#L1934
Potentially we could look to do the same for the Data Plane SDK here and have this as a separate resource; but that depends on the intended use-case here :)
+1 it would be great to have this feature. Is there any update or timeline? Thanks everyone!
Any update on this?
upstream PR: https://github.com/tombuildsstuff/giovanni/issues/10
Is there a best-practice workaround until this is fixed?
@TomLous
We create the filesystem by invoking az rest after terraform apply:

```shell
az rest --method put --uri "https://%STORAGE_ACCOUNT_NAME%.dfs.core.windows.net/%FILE_SYSTEM_NAME%?resource=filesystem" --resource "https://storage.azure.com"
```

You can also check first whether it already exists:

```shell
az rest --method get --uri "https://%STORAGE_ACCOUNT_NAME%.dfs.core.windows.net?resource=account" --resource https://storage.azure.com --query "filesystems[?name=='%FILE_SYSTEM_NAME%'].name"
```
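The check and the create can be combined into a small idempotent helper script — a sketch only, assuming an authenticated az CLI session; the account and filesystem names passed to it are placeholders:

```shell
# Build the Data Lake Gen2 (dfs) endpoint URI for a filesystem.
dfs_uri() {
  echo "https://${1}.dfs.core.windows.net/${2}?resource=filesystem"
}

# Create the filesystem only if it does not already exist.
# Usage: create_filesystem <storage-account-name> <filesystem-name>
create_filesystem() {
  account="$1"
  fs="$2"
  # Query existing filesystems in the account and look for a match.
  existing=$(az rest --method get \
    --uri "https://${account}.dfs.core.windows.net?resource=account" \
    --resource https://storage.azure.com \
    --query "filesystems[?name=='${fs}'].name" -o tsv)
  if [ -z "$existing" ]; then
    az rest --method put \
      --uri "$(dfs_uri "$account" "$fs")" \
      --resource https://storage.azure.com
  fi
}

# Example (requires an authenticated az CLI session):
# create_filesystem mystorageaccount myfilesystem
```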
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!