Azure-docs: Access rights to shared file

Created on 9 Oct 2018  ·  29 Comments  ·  Source: MicrosoftDocs/azure-docs

I have tried the workflow from https://docs.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files with Grafana:

az container create --resource-group --name --image grafana/grafana:latest --dns-name-label --ports 3000 --azure-file-volume-account-name $STORAGE_ACCOUNT --azure-file-volume-account-key $STORAGE_KEY --azure-file-volume-share-name $ACI_PERS_SHARE_NAME --azure-file-volume-mount-path /var/lib/grafana --cpu 1 --memory 1

The container keeps restarting -- probably because Grafana is not able to update the files on the persistent share. How can I make sure access rights are properly set on the share? Can I connect using a policy set on the file share?




All 29 comments

Thanks for the feedback! We are currently investigating and will update you shortly.

@gs9824 can you provide some more information about Grafana? I am not familiar with this service.

@MicahMcKittrick-MSFT , Grafana is a tool to visualise data from different sources through dashboards. The problem with running Grafana as a Docker container is that the settings are not retained when the container restarts.
The solution for this is to store the configuration in persistent storage, so that a restart does not result in configuration loss.
I am trying to use an Azure file share for this and mount it in the container (/var/lib/grafana). That way the configuration is persisted.
It seems Grafana can create files within that storage (I can see them through Storage Explorer), but it does not seem to be able to update any files.

t=2018-10-09T14:20:36+0000 lvl=eror msg="Server shutdown" logger=server reason="Service init failed: Migration failed err: database is locked"

The database is stored on the file share.

On the file share it is possible to create access rights for users, but I don't see how I can specify these in the container creation command.

@gs9824 have you looked at using a persistent disk with AKS?

https://docs.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv

@gs9824 any update on this?

@MicahMcKittrick-MSFT , sorry, not yet. I have first been trying to get the file share working by executing some chmod commands during container creation. So far without luck.

@MicahMcKittrick-MSFT As far as I understand, this would involve setting up a complete AKS cluster. I just wanted to run Grafana as a simple Azure Container Instance, without modifying anything in the container.

@seanmck @iainfoulds, would either of you have any thoughts on this?

@gs9824 just FYI, I am working offline to get an answer to this. Will update you once I have more information.

@gs9824 since this is intended to be a long-running instance and it is seeing lots of restarts, you might just need to add a long-running command line to the az container create command, like --command-line "tail -f /dev/null".
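A minimal sketch of what that could look like, based on the create command in the original post (the resource group, container name and DNS label below are placeholder values, not taken from the original):

az container create \
  --resource-group myResourceGroup \
  --name mygrafana \
  --image grafana/grafana:latest \
  --dns-name-label mygrafana-dns \
  --ports 3000 \
  --azure-file-volume-account-name $STORAGE_ACCOUNT \
  --azure-file-volume-account-key $STORAGE_KEY \
  --azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
  --azure-file-volume-mount-path /var/lib/grafana \
  --cpu 1 --memory 1 \
  --command-line "tail -f /dev/null"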

Also, check this guide for additional troubleshooting options:

https://docs.microsoft.com/en-us/azure/container-instances/container-instances-troubleshooting#container-continually-exits-and-restarts-no-long-running-process

@gs9824 any update on this?

Hi there

Coincidentally, I have exactly the same issue for exactly the same use case (Grafana).

Grafana logs the following error:
lvl=eror msg="Server shutdown" logger=server reason="Service init failed: Migration failed err: database is locked"
And as a result, the container terminates.
--command-line "tail -f /dev/null" just prevents the log from being displayed for me; Grafana still does not start up and the container terminates.

Starting without pointing Grafana to the mount, it will use a local folder, which works. Through the Azure Portal I have then tried to create folders and files on the mount manually, which works fine.

Here is an ARM template so you can reproduce the problem. Just replace some of the placeholders I added and you should be good to go.

{ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "SOME_CONTAINER_GROUP_NAME": { "defaultValue": "CONTAINER_GROUP_NAME", "type": "String" } }, "variables": {}, "resources": [{ "comments": "Generalized from resource: '/subscriptions/SUBSCRIPTION_ID/resourceGroups/grafana-test/providers/Microsoft.ContainerInstance/containerGroups/CONTAINER_GROUP_NAME'.", "type": "Microsoft.ContainerInstance/containerGroups", "name": "[parameters('SOME_CONTAINER_GROUP_NAME')]", "apiVersion": "2018-04-01", "location": "westeurope", "scale": null, "properties": { "containers": [{ "name": "[parameters('SOME_CONTAINER_GROUP_NAME')]", "properties": { "image": "grafana/grafana", "command": [], "ports": [{ "protocol": "TCP", "port": 3000 },{ "protocol": "TCP", "port": 80 } ], "environmentVariables": [{ "name": "GF_INSTALL_PLUGINS", "value": "grafana-azure-monitor-datasource" },{ "name": "GF_PATHS_DATA", "value": "/mnt/grafana" } ], "resources": { "requests": { "memoryInGB": 1.5, "cpu": 1 } }, "volumeMounts": [{ "name": "grafana-storage", "mountPath": "/mnt/grafana", "readOnly": false } ] } } ], "volumes": [{ "name": "grafana-storage", "azureFile": { "shareName": "grafana-test", "readOnly": false, "storageAccountName": "SOMESTORAGE_ACCOUNT", "storageAccountKey": "SOMESTORAGE_KEY" } } ], "restartPolicy": "Always", "ipAddress": { "ports": [{ "protocol": "TCP", "port": 3000 },{ "protocol": "TCP", "port": 80 } ], "type": "Public" }, "osType": "Linux" }, "dependsOn": [] } ] }

@MicahMcKittrick-MSFT Sorry for not responding earlier -- good to know that I'm not the only one with the issue :). As a workaround I have Grafana running on a VM in the meantime (installed through the Marketplace - Grafana from Grafana Labs). The restart most likely happens because Grafana fails to start due to the locked database, as @orendin states.

Thanks all for the extra details.

I am not sure if this is information we can include in this doc, but regardless I will assign it to the author to review and see if we can add any details.

Hi there, I have the same problem too, with Prometheus this time.

Same scenario: I am using a Prometheus image to create an ACI with a mounted file share.

What I did (script begins):

echo "parameters"
ACI_PERS_RESOURCE_GROUP=ftp-group-container
ACI_PERS_LOCATION=some_location
ACI_PERS_SHARE_NAME=some_share
ACI_PERS_STORAGE_ACCOUNT_NAME=some_storage_account  # assumed placeholder; the original script uses this variable without defining it

echo "create the storage account with the parameters"
az storage account create \
  --resource-group $ACI_PERS_RESOURCE_GROUP \
  --name $ACI_PERS_STORAGE_ACCOUNT_NAME \
  --location $ACI_PERS_LOCATION \
  --sku Standard_LRS

echo "create the file share"
az storage share create --name $ACI_PERS_SHARE_NAME --account-name $ACI_PERS_STORAGE_ACCOUNT_NAME

echo "get the storage key"
STORAGE_KEY=$(az storage account keys list --resource-group $ACI_PERS_RESOURCE_GROUP --account-name $ACI_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" --output tsv)
echo "storage key: $STORAGE_KEY"

echo "create a container instance"
az container create -g some-container-group \
  --name prometheus-instance \
  --image prom/prometheus \
  --restart-policy OnFailure \
  --ports 9090 \
  --dns-name-label some-label \
  --ip-address public \
  --azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
  --azure-file-volume-account-key $STORAGE_KEY \
  --azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
  --azure-file-volume-mount-path /etc/prometheus/

(script ends)

I added a YAML file, 'prometheus.yml', to the file share.

What I expected

I expected the ACI to start and read the YAML file that I created on the file share, since I mounted /etc/prometheus/ on it.

What really happened

The ACI kept restarting, because it needs to get its configuration from '/etc/prometheus/prometheus.yml', and the error is
err="error loading config from \"/etc/prometheus/prometheus.yml\": couldn't load configuration (--config.file=\"/etc/prometheus/prometheus.yml\"): open /etc/prometheus/prometheus.yml: permission denied

What I tried to do

I managed to get bash access to the file system to add the permissions myself:
drwx------ 2 root root 0 Nov 15 21:33 prometheus.yml
chmod a+rwx prometheus.yml
drwx------ 2 root root 0 Nov 15 21:34 prometheus.yml

As you can see, even running chmod as root on the file I couldn't change the permissions, so maybe that is why I can't get the ACI to read the YAML config file.

thanks for any help.

Hi guys,
I fixed my problem.

What was missing

az storage share policy create --help

What I did

I added a share policy with all the permissions needed (read, write, list) and it worked.
I no longer have the permission problem on the ACI.

Hope this will help.
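For reference, a minimal sketch of what such a stored access policy could look like, reusing the variables from the script above (the policy name aci-rwl-policy is just a placeholder I picked):

# create a stored access policy on the share granting read, write and list
az storage share policy create \
  --account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
  --account-key $STORAGE_KEY \
  --share-name $ACI_PERS_SHARE_NAME \
  --name aci-rwl-policy \
  --permissions rwl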

Hey there,
I'm facing the same issue while deploying a neo4j image as a container instance.
Thanks for all the workarounds mentioned above. Tried all of them.
Problem

  • I have a file share, created for the container in the same resource group. The file share has to be mounted as a persistent disk for the container.
  • As expected, the file share has been mounted, but the container is unable to read the data.
  • Access rights issues.

_References_

{
    "....": [],
    "properties": {
        "containers": [
            {
                "....": [],
                "environmentVariables": [],
                "resources": {
                    "requests": {
                        "memoryInGB": 4,
                        "cpu": 1
                    }
                },
                "volumeMounts": [
                    {
                        "name": "data",
                        "mountPath": "/var/lib/neo4j/data/"
                    }
                ]
            }
        ]
    },
    "....": [],
    "volumes": [
        {
            "name": "data",
            "azureFile": {
                "shareName": "neo4jfiles",
                "storageAccountName": "neo4jcontainerstorage",
                "storageAccountKey": "<I-REVEALED-MY-SECRET-KEY>"
            }
        }
    ],
    "....": []
}

Original issue appears to be resolved. Adding @dkkapur for a second look in case you have comments. Thanks.

Has anyone solved the problem for Grafana? Unfortunately, I still get the error

lvl=eror msg="Server shutdown" logger=server reason="Service init failed: Migration failed err: database is locked"

I have created an "Access Policy" with all rights, but I do not know if I have to specify this somehow when mounting the volume:

"resources": [
    {
      "apiVersion": "2018-10-01",
      "type": "Microsoft.ContainerInstance/containerGroups",
      "location": "[parameters('location')]",
      "name": "[parameters('containerName')]",
      "properties": {
        "containers": [
          {
            "name": "[parameters('containerName')]",
            "properties": {
              "image": "[parameters('imageName')]",
              "ports": "[parameters('ports')]",
              "environmentVariables": [
                {
                  "name": "GF_PATHS_DATA",
                  "value": "/mnt/grafana/data"
                },
                {
                  "name": "GF_PATHS_LOGS",
                  "value": "/mnt/grafana/logs"
                },
                {
                  "name": "GF_DASHBOARDS_PATH",
                  "value": "/mnt/grafana/dashboards"
                }
              ],
              "volumeMounts": [
                {
                  "name": "grafana-storage",
                  "mountPath": "/mnt/grafana",
                  "readOnly": false
                }
              ],
              "resources": {
                "requests": {
                  "cpu": "[int(parameters('numberCpuCores'))]",
                  "memoryInGB": "[float(parameters('memory'))]"
                }
              }
            }
          }
        ],
        "volumes": [
          {
            "name": "grafana-storage",
            "azureFile": {
              "shareName": "grafana",
              "readOnly": false,
              "storageAccountName": "_NAME_",
              "storageAccountKey": "_ACCESS_KEY_"
            }
          }
        ],
        "restartPolicy": "[parameters('restartPolicy')]",
        "osType": "[parameters('osType')]",
        "ipAddress": {
          "type": "[parameters('ipAddressType')]",
          "ports": "[parameters('ports')]",
          "dnsNameLabel": "[parameters('dnsNameLabel')]"
        }
      },
      "tags": {}
    }
  ]

Microsoft, please fix the fact that mounted file share storage requires the container to run as root, as this goes against Docker best practices. Furthermore, it rules out a lot of containers that do not run as root and need file storage.

I think the underlying issue with Grafana is that the Docker image does not run as root.

The file share seems to be mounted as root:root 777, which is the default.

When the Grafana user tries to access the mounted storage, it will fail. You can't chown the mounted storage either; I have escalated the user back to root and, in an init.sh script, tried to assign permissions to the Grafana user. No luck.

You do not have enough permissions inside an ACI container to mount any other file shares either; a cifs mount fails.

Proposed solution: add the ability to specify mount options, the same as in Azure AKS.
In Kubernetes / AKS, you can mount an Azure file share and specify mount options, like GID, UID and the default permissions.

@dlepow @dkkapur Any news on this topic?

Assigning to Deep for follow-up. Thanks!

assign:@dkkapur

@dkkapur any updates?

Hey folks!
I finally found this issue ... I spent 10 hrs trying to make a workaround.

Any news on it? It looks like it was assigned more than a year ago, but there is no answer.

The same happens with a standard PostgreSQL Docker image: if you try to make the database persistent by mounting a file share, it doesn't start.

@MicahMcKittrick-MSFT Hey Micah, any update on this?

@dkkapur Is there any plan to allow an ACI container to mount an Azure file share with mount options like the following, as AKS does?

mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=1000
- gid=1000
- mfsymlinks
- nobrl
- cache=none
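For comparison, a minimal sketch of where those options live in AKS today, via a StorageClass for dynamically provisioned Azure Files volumes (the class name azurefile-grafana is just a placeholder):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-grafana
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
  - mfsymlinks
  - nobrl
  - cache=none
parameters:
  skuName: Standard_LRS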

I think the underlying issue with Grafana is that the Docker image does not run as root.

Exactly. It uses a limited-permission user called grafana (unless changed deliberately).

This issue is basically preventing any persistent storage binding in production, because we can't rely on the health of the Docker container, which is exactly the reason we use Docker containers in the first place. Is there any storage we can attach that works with a non-elevated user?

Hi,
any update on this?

I'm facing the same issue with a PostgreSQL container on which I'd like to mount a file share at /var/lib/postgresql/data to persist a small database across multiple container versions, but the database doesn't have permission to use the mounted file share.

Here are the logs output by the Azure Container Instance:

The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 20
selecting default shared_buffers ... 400kB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
2020-08-28 07:12:06.201 UTC [83] FATAL:  data directory "/var/lib/postgresql/data" has wrong ownership
2020-08-28 07:12:06.201 UTC [83] HINT:  The server must be started by the user that owns the data directory.
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
running bootstrap script ...

And then the container stops.

I'm also quite new to deploying containers to Azure, so if this isn't the intended way to do it, I would be glad to learn something new.
(P.S.: I just noticed that alessiostalla already mentioned that this also affects PostgreSQL images.)
