Azure-docs: 403 - ErrorCode=AuthorizationPermissionMismatch

Created on 11 Feb 2019  ·  25 Comments  ·  Source: MicrosoftDocs/azure-docs

Hi All,

I keep getting the following error from this line:
dbutils.fs.ls("abfss://[email protected]/")

I'd appreciate it if someone could help me with this.

Here is the entire error:

StatusCode=403
StatusDescription=This request is not authorized to perform this operation using this permission.
ErrorCode=AuthorizationPermissionMismatch
ErrorMessage=This request is not authorized to perform this operation using this permission.
RequestId:714f14ca-d01f-012a-2104-c20d86000000
Time:2019-02-11T12:20:08.7145914Z
at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:133)
at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.services.AbfsClient.listPath(AbfsClient.java:180)
at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:509)
at shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:317)
at com.databricks.backend.daemon.dbutils.FSUtils$$anonfun$ls$1.apply(DBUtilsCore.scala:83)
at com.databricks.backend.daemon.dbutils.FSUtils$$anonfun$ls$1.apply(DBUtilsCore.scala:82)
at com.databricks.backend.daemon.dbutils.FSUtils$.com$databricks$backend$daemon$dbutils$FSUtils$$withFsSafetyCheck(DBUtilsCore.scala:78)
at com.databricks.backend.daemon.dbutils.FSUtils$.ls(DBUtilsCore.scala:82)
at com.databricks.dbutils_v1.impl.DbfsUtilsImpl.ls(DbfsUtilsImpl.scala:33)
at line660775b1e8f7402e98201c8d2a5c61cb45.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3921712348151821:7)
at line660775b1e8f7402e98201c8d2a5c61cb45.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-3921712348151821:61)
at line660775b1e8f7402e98201c8d2a5c61cb45.$read$$iw$$iw$$iw$$iw.<init>(command-3921712348151821:63)
at line660775b1e8f7402e98201c8d2a5c61cb45.$read$$iw$$iw$$iw.<init>(command-3921712348151821:65)
at line660775b1e8f7402e98201c8d2a5c61cb45.$read$$iw$$iw.<init>(command-3921712348151821:67)
at line660775b1e8f7402e98201c8d2a5c61cb45.$read$$iw.<init>(command-3921712348151821:69)
at line660775b1e8f7402e98201c8d2a5c61cb45.$read.<init>(command-3921712348151821:71)
at line660775b1e8f7402e98201c8d2a5c61cb45.$read$.<init>(command-3921712348151821:75)
at line660775b1e8f7402e98201c8d2a5c61cb45.$read$.<clinit>(command-3921712348151821)
at line660775b1e8f7402e98201c8d2a5c61cb45.$eval$.$print$lzycompute(<notebook>:7)
at line660775b1e8f7402e98201c8d2a5c61cb45.$eval$.$print(<notebook>:6)
at line660775b1e8f7402e98201c8d2a5c61cb45.$eval.$print(<notebook>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
at com.databricks.backend.daemon.driver.DriverILoop.execute(DriverILoop.scala:199)
at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply$mcV$sp(ScalaDriverLocal.scala:189)
at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:189)
at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:189)
at com.databricks.backend.daemon.driver.DriverLocal$TrapExitInternal$.trapExit(DriverLocal.scala:534)
at com.databricks.backend.daemon.driver.DriverLocal$TrapExit$.apply(DriverLocal.scala:489)
at com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:189)
at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$5.apply(DriverLocal.scala:273)
at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$5.apply(DriverLocal.scala:253)
at com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:235)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:230)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:42)
at com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:268)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:42)
at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:253)
at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:589)
at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:589)
at scala.util.Try$.apply(Try.scala:192)
at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:584)
at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:475)
at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:542)
at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:381)
at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:328)
at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:215)
at java.lang.Thread.run(Thread.java:748)


Most helpful comment

Hi All,

Thanks for the update.

I was able to sort this out with Microsoft's help. The issue was related to the ACL settings on the blob container and folders. You have to take the Service Principal Object ID (not the App Registration Object ID) and grant permissions to it using Azure Storage Explorer. Once that is done, everything starts working.

All 25 comments

@DineshPriyankara Thanks for your question. We are checking on this and will respond to you soon.

@DineshPriyankara Make sure to assign your application the Storage Blob Data Contributor role.

Please make sure your account has this role assigned and try again.

Hope this helps.

Hi All,

Thanks for the update.

I was able to sort this out with Microsoft's help. The issue was related to the ACL settings on the blob container and folders. You have to take the Service Principal Object ID (not the App Registration Object ID) and grant permissions to it using Azure Storage Explorer. Once that is done, everything starts working.

please-close. @mamccrea, let's be sure the doc is clear about the resolution Dinesh found.

@DineshPriyankara We will now proceed to close this thread. If there are further questions regarding this matter, please tag me in your reply. We will gladly continue the discussion and we will reopen the issue.

Hi All,

Thanks for the update.

I was able to sort this out with Microsoft's help. The issue was related to the ACL settings on the blob container and folders. You have to take the Service Principal Object ID (not the App Registration Object ID) and grant permissions to it using Azure Storage Explorer. Once that is done, everything starts working.

Could you please elaborate?

Just like the way we access Data Lake Gen 1, you need to set the configuration with the App Registration ID (Client ID) and Secret for Data Lake Gen 2. In addition to that, you need to get the object_id of your App Registration's service principal and grant it permissions on each container and folder in your Data Lake Gen 2 using Azure Storage Explorer. Note that this object_id is not the one you see in the properties of the App Registration. You need to get the object_id using:
az ad sp show --id <application-id>

Hope it is clear.
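
To make that concrete, here is a minimal sketch of the direct-access configuration being described, as it would appear in a Databricks (Python) notebook. Every angle-bracket value below is a placeholder, not a value from this thread:

# Minimal sketch: direct ABFS access with a service principal (OAuth).
# All <angle-bracket> values are placeholders; substitute your own.
spark.conf.set("fs.azure.account.auth.type.<storage-account>.dfs.core.windows.net", "OAuth")
spark.conf.set("fs.azure.account.oauth.provider.type.<storage-account>.dfs.core.windows.net",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set("fs.azure.account.oauth2.client.id.<storage-account>.dfs.core.windows.net",
               "<application-id>")
spark.conf.set("fs.azure.account.oauth2.client.secret.<storage-account>.dfs.core.windows.net",
               dbutils.secrets.get(scope="<scope-name>", key="<secret-key>"))
spark.conf.set("fs.azure.account.oauth2.client.endpoint.<storage-account>.dfs.core.windows.net",
               "https://login.microsoftonline.com/<tenant-id>/oauth2/token")

# Once the service principal's *object ID* (not the App Registration object ID)
# has the required ACLs or role, the listing from the original question should succeed:
dbutils.fs.ls("abfss://<container>@<storage-account>.dfs.core.windows.net/")

Note that the client ID and secret here identify the App Registration, while the ACLs in Storage Explorer must be granted to the service principal's object_id from az ad sp show.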

Hi, I have an urgent support request for this.
The requirement is to limit users with POSIX permissions on each folder in ADLS Gen2 with Databricks. When I use the Data Contributor role for the SP, then from Databricks with direct access it can do anything, which is what we do not want. When I use the Data Reader role, the only thing the SP can do is list, not write, no matter what access we set for the object ID at the folder level. Please help ASAP!

Hello @James-tn. My understanding is that there's been a recent bug fix (https://issues.apache.org/jira/browse/HADOOP-15969) which removes the requirement to use RBAC as a prerequisite to using ACLs. This would mean that you can set ACLs on the files and folders without first assigning a role to the related service principal.

I removed the role of the app from Azure IAM.
I set ACLs in Storage Explorer with R, W, X
Then I ran a simple command to list: it does not work.

StatusDescription=This request is not authorized to perform this operation using this permission.

Hi @james-tn - Did you have the role assignment associated with your resource group or subscription? I ask because any changes to a parent resource group or subscription (such as adding role assignments or removing them) can take time to propagate down to the child storage account. I've recently discovered this in my own tests.

No, there was just a Data Reader role on the storage account assigned to the SP. That just gave the SP list access to all folders, no matter what I set in the ACLs.
Then, following your first response, I removed the role, and list no longer works.

Ok, so it sounds like you assigned the "Storage Blob Data Reader" role to the service principal (in the scope of the storage account) and you want to provide that basic level of access as a starting point. Then you attempted to grant ACL access to individual files, and that did not work. Strange. I would have expected that to work. The order of permission checks (when a data operation is attempted) is to first evaluate permissions based on RBAC. Then, if the role assignment does not grant sufficient access, the ACL permissions on the affected folders or files are evaluated. Clearly this did not work, which seems odd to me. The only thing I can think of is that Databricks somehow requires the "Storage Blob Data Contributor" role on the service principal (if RBAC is used). Their topic suggests that this is true: https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-datalake-gen2.html. I'll ping the Databricks folks and try to get a response on that. If it turns out that this is required (as they document that it is), then the only other way I can imagine providing more targeted access is to remove RBAC (like you've done) and then use ACLs to give the access that you need to provide.

Just to clarify: if I am not mistaken, this requires the Premium pricing tier for the Databricks workspace. Is yours Premium or Standard?

Yes, we are on Premium.
We want to have just the Blob Data Reader role and rely on ACLs, but it does not work. Only read access is there.

Ok, it turned out Databricks doesn't support ACLs with ADLS Gen2 yet:

"You must use role-based access control to access ADLS Gen2 storage using Azure Databricks. File-level access control does not work with the connector."
https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-datalake-gen2.html

Hi there, I came across the same issue as well. Does it require Azure Databricks Premium to talk to Azure Data Lake Gen2 blob storage? I've also tried to use the secret scope method in Databricks to connect to Data Lake Gen2, but I got an error message when creating the secret scope on Azure Databricks: "Premium Tier is disabled in this workspace." Based on all the comments above, I suppose I can't connect to Azure Data Lake Gen2 no matter what method I use unless I have Azure Databricks Premium.

Just like the way we access Data Lake Gen 1, you need to set the configuration with the App Registration ID (Client ID) and Secret for Data Lake Gen 2. In addition to that, you need to get the object_id of your App Registration's service principal and grant it permissions on each container and folder in your Data Lake Gen 2 using Azure Storage Explorer. Note that this object_id is not the one you see in the properties of the App Registration. You need to get the object_id using:
az ad sp show --id <application-id>

Hope it is clear.

Hi,
I have tried the same approach, but I am still getting the 403 error.
Could you please list all the necessary steps?

Thanks in advance!

Ram

Update from me: I mis-observed my test between Databricks and ADLS Gen2. It totally works.
A few things people need to know:

  • Assign the Storage Blob Data Reader role to the Application ID you use for Databricks, under IAM in the Storage Account settings in the portal.
  • Use Storage Explorer to give the object ID (remember: the object ID, not the Application ID; you can get it using az ad sp show --id your_application_id) the appropriate read/write/execute access to the parent and sub folders. For example, you can give read access to the parent folder and write access to a particular subfolder.
  • Make sure you follow the Azure Databricks guidelines for mounting or configuring direct access from Databricks to ADLS Gen2 using the SP (a mount sketch follows this list).
    This feature is very important for data security.
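
For reference, here is a sketch of the mount step those guidelines describe, again with placeholder values only (none of these names come from this thread):

# Sketch of mounting ADLS Gen2 in Databricks with a service principal.
# All <angle-bracket> values are placeholders; substitute your own.
configs = {
  "fs.azure.account.auth.type": "OAuth",
  "fs.azure.account.oauth.provider.type": "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
  "fs.azure.account.oauth2.client.id": "<application-id>",
  "fs.azure.account.oauth2.client.secret": dbutils.secrets.get(scope="<scope-name>", key="<secret-key>"),
  "fs.azure.account.oauth2.client.endpoint": "https://login.microsoftonline.com/<tenant-id>/oauth2/token"
}

# Mount the container under /mnt; reads and writes through the mount point
# are then governed by the role assignment and the ACLs set above.
dbutils.fs.mount(
  source = "abfss://<container>@<storage-account>.dfs.core.windows.net/",
  mount_point = "/mnt/<mount-name>",
  extra_configs = configs)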

Recently the bug fix for HADOOP-15969 has been included in Azure Databricks Runtime versions 5.x. This eliminates the need to grant IAM-level permissions of any kind to the Service Principal. Just using ACLs works, as long as the OID of the Service Principal (and not the OID of the App Registration) is used to define the permissions in the ACLs. Please see my blog post, which does an end-to-end walkthrough of this with all the details covered.

Now I know there was a bug. I thought I had misobserved my test; suddenly it worked recently.

I am still having the same error, although I followed all the previous steps mentioned above.

I was able to resolve this by going to the app registration (after following all of the service principal recommendations above). In the app registration that was created, go to API permissions and add Azure Storage with the user_impersonation permission; I added Azure Data Lake the same way for good measure.

Thank you @DineshPriyankara for posting this issue.
And thank you @arvindshmicrosoft for the analysis and detailed blog post about the resolution!

For anybody else who has just arrived at this issue, the key component for resolving it is Microsoft Azure Storage Explorer. Once you set up this application on your computer and sign in to your Azure account within it, you can follow the steps described by @arvindshmicrosoft to resolve the issue [by granting the suitable ACLs to your Service Principal].

Also, as of the time this comment was posted, there is no longer a need to assign an IAM role to your Service Principal to grant explicit access to the storage account in order to mount the file system onto the Databricks root.

Lastly, if someone from Databricks is reading this, it would be of immense help if these details were added to the official Azure Databricks documentation.
