Describe the bug
If you run the chart with tag 2019-GA-ubuntu-16.04 and image repo mcr.microsoft.com/mssql/server (i.e. mcr.microsoft.com/mssql/server:2019-GA-ubuntu-16.04), the container fails during setup.
Version of Helm and Kubernetes:
Helm:
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Kubernetes:
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-15T12:02:12Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Which chart:
stable/mssql-linux
What happened:
The log shows a failure while copying the system data files:
SQL Server 2019 will run as non-root by default.
This container is running as user mssql.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
2019-11-16 22:23:17.92 Server Setup step is copying system data file 'C:\templatedata\master.mdf' to '/mssql-data/master/master.mdf'.
2019-11-16 22:23:17.99 Server ERROR: Setup FAILED copying system data file 'C:\templatedata\master.mdf' to '/mssql-data/master/master.mdf': 2(The system cannot find the file specified.)
ERROR: BootstrapSystemDataDirectories() failure (HRESULT 0x80070002)
What you expected to happen:
The chart runs normally, as it does when you run the 2017-latest-ubuntu tag.
How to reproduce it (as minimally and precisely as possible):
Simply run the chart with the tag above and persistence enabled (persistence.enabled: true).
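For example, a minimal values file along these lines reproduces it (a sketch: the key names assume the chart's standard image.repository, image.tag, and persistence.enabled values, and the acceptEula flag and file name are illustrative):

# values.yaml (sketch)
image:
  repository: mcr.microsoft.com/mssql/server
  tag: 2019-GA-ubuntu-16.04
persistence:
  enabled: true

helm install stable/mssql-linux -f values.yaml --set acceptEula.value=Y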
I tested it on a local cluster and it worked, so this seems related to running the chart on AKS (which is where I'm hitting the problem). I opened an issue there: https://github.com/Azure/AKS/issues/1319
I'm not sure it's AKS alone, though; it might be a combination of a problem in the chart and a problem in AKS.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
It is not stale.
It's definitely not stale, I'm having the same issue, but in a local cluster where volumes are pre-mounted...
This issue is not stale! It happens on GCP, too.
I'm deploying in Openshift and this happens when the service account that the mssql pod starts with is allowed to be root. Removing that or using a non-root service accounts works for me.
@giggio add this one to your configuration:

securityContext:
  enabled: true
  allowPrivilegeEscalation: false
  runAsUser: 1000
  fsGroup: 2000
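If you are deploying through the chart, the same override as --set flags would look roughly like the following (a sketch: it assumes the chart exposes these values under a top-level securityContext key, and the release name mymssql is hypothetical):

helm upgrade --install mymssql stable/mssql-linux \
  --set securityContext.enabled=true \
  --set securityContext.allowPrivilegeEscalation=false \
  --set securityContext.runAsUser=1000 \
  --set securityContext.fsGroup=2000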
This issue is being automatically closed due to inactivity.
@giggio add this one to your configuration:

securityContext:
  runAsUser: 1000
  fsGroup: 2000
I had to remove two fields (enabled and allowPrivilegeEscalation) as they are now invalid, but the above got me going. TYVM!!
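For anyone applying this as a raw manifest instead of through the chart, here is a sketch of where the surviving fields live in a plain pod spec: enabled is not a Kubernetes field at all, and allowPrivilegeEscalation is a container-level field rather than a pod-level one, which is presumably why those two were rejected. The pod and container names below are hypothetical.

apiVersion: v1
kind: Pod
metadata:
  name: mssql-example        # hypothetical name, for illustration only
spec:
  securityContext:           # pod-level: runAsUser and fsGroup belong here
    runAsUser: 1000
    fsGroup: 2000
  containers:
    - name: mssql
      image: mcr.microsoft.com/mssql/server:2019-GA-ubuntu-16.04
      securityContext:       # container-level: allowPrivilegeEscalation belongs here
        allowPrivilegeEscalation: false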