Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG
Minikube version (use minikube version): v0.14.0
Environment:
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
Docker version (docker -v): Docker version 1.12.0, build 8eab29e
What happened:
I'm trying to set up a StatefulSet with minikube. It's failing, both for Consul and ZooKeeper. Repro:
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/tutorials/stateful-application/zookeeper.yaml
From the Kubernetes blog entry.
Also see this issue.
What you expected to happen:
I expected it to work.
How to reproduce it (as minimally and precisely as possible):
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/tutorials/stateful-application/zookeeper.yaml
Anything else we need to know: No.
Thanks for the report. I've repro'd this and am now going to try in a GKE cluster to see if this is minikube-specific or not.
It looks like this does work in GKE. Minikube uses a HostPath PV, while GKE uses a GCEPersistentDisk. Something in the HostPath provisioner must be leaving the permissions different.
This looks like the relevant code: https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/host_path/host_path.go#L301
The hostpaths get created with 0750, and localkube runs as root so the directory is only accessible by root.
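For anyone who wants to see this directly, here is a quick check from inside the minikube VM (the /tmp/hostpath_pv location is an assumption based on the provisioner defaults; the PV's spec.hostPath.path shows the real directory):

minikube ssh
ls -ld /tmp/hostpath_pv/*
# drwxr-x--- ... root root ...   <- mode 0750 and owned by root, so a process
# running as UID 1000 inside the container gets "permission denied"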
It looks like this issue: https://github.com/kubernetes/kubernetes/issues/2630
The "fsGroup" workaround doesn't work either, since HostPath doesn't support fsGroup.
We might want to open an issue on the contrib repo to stop running as the zookeeper user.
Seems like the relevant code came from this commit.
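As an aside, one workaround that needs no changes to minikube or Kubernetes is an init container that runs as root and chowns the mount before the main container starts. This is only a sketch: the volume name and mount path are taken from the zookeeper example and may need adjusting, and on the Kubernetes version minikube shipped at the time, init containers were still declared via the pod.beta.kubernetes.io/init-containers annotation rather than this field:

spec:
  initContainers:
  - name: fix-permissions
    image: busybox
    # chown the hostPath-backed volume to the UID/GID zookeeper runs as (1000:1000)
    command: ["sh", "-c", "chown -R 1000:1000 /var/lib/zookeeper"]
    volumeMounts:
    - name: datadir
      mountPath: /var/lib/zookeeper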
So why does GKE work then? Shouldn't it have the same security assumptions about what GID and UID are allowed to access a mounted volume inside a container?
Also:
How come HostPaths are only writable by root by design? Or is it by accident?
It works in GKE because the GCEPersistentDisk provisioner doesn't have the same limitations. If you tried to use a HostPath volume in GKE you'd have the same issue.
I agree we should try to make these work out of the box in Minikube. Here are some options I see:
I'm not sure if anyone else has other ideas.
I added a comment showcasing my ignorance, hoping to be enlightened. So I think making HostPath work similarly to how Docker volumes work is the best way forward. Also relevant:
Note that ZK runs with:
securityContext:
  runAsUser: 1000
  fsGroup: 1000
Thanks!
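For context, that securityContext sits at the pod level of the StatefulSet template, roughly like this (abbreviated; apps/v1beta1 is the API group StatefulSets used in that release):

apiVersion: apps/v1beta1
kind: StatefulSet
spec:
  template:
    spec:
      securityContext:
        runAsUser: 1000   # run the ZooKeeper process as UID 1000
        fsGroup: 1000     # ask the volume plugin to make volumes group-writable by GID 1000
      containers:
      - name: zookeeper
        image: ...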
FYI, I tried out this commit and it seems to work: https://github.com/dlorenc/kubernetes/commit/824d1f0c21a11d1c259e45fd8741764a9a125870
Hopefully we can get some clarity on why this hasn't been implemented in k8s yet.
Could you PR that to kubernetes to see what they say?
@dlorenc: Could you perhaps submit a pull request so the CI builds your minikube release and I can test? I'd like to try out that commit as well for a permissions issue with PHP's RecursiveDirectoryIterator(), but I'm not familiar with the build process for minikube (specifically, incorporating a new Kubernetes build/commit).
I have successfully built k8s from your fork (which, after a little research, seems misguided, as it's normally part of the mini/localkube build process).
I'm also happy to do it myself if you can direct me to any guidance or documentation, but I tried this and it failed in the minikube build step after 5, which I assume was supposed to be "make". I'm in Slack as well if you prefer/have time to chat. Thanks!
@haf I'll submit one to k8s today.
@michaelfavia I just submitted one to minikube so we'll get a CI build. Check #959
Any updates on this? It would be great to be able to run these services on minikube rather than having to bridge to a vagrant-ansible setup.
Here's the current roadmap.
Sorry, haven't heard anything on the PR yet. Hopefully we hear back this week.
Could minikube work around this while the main project considers the change?
I came across the same issues trying to run helm charts for rabbitmq: https://github.com/kubernetes/charts/issues/825
The code is trying to run as the rabbitmq user, but the permissions on the folders are reset to root.
I am trying to run zookeeper on minikube as well.
When I use this security context I get a permission denied error:
securityContext:
  runAsUser: 1000
  fsGroup: 1000
But without the runAsUser it works fine:
securityContext:
  fsGroup: 1000
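A quick way to see why dropping runAsUser helps is to check which UID the process actually gets (the pod name is a placeholder):

kubectl exec zk-0 -- id
# with runAsUser: 1000 the process is UID 1000 and cannot write to the root-owned hostPath;
# without it, the image's default user applies, which in this case evidently can write.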
I am running into this issue with a Jenkins instance: the JENKINS_HOME path needs to be persistent, and neither fsGroup on its own nor runAsUser and fsGroup combined is helping.
I ran into the same issue with Jenkins.
After ignoring this for a while I had to come back to it and figure it out. You have to enable the "standard" storage class:
minikube addons enable default-storageclass
Then I can get the helm chart for jenkins to work by adding to the values.yaml:
Persistence:
  StorageClass: standard
You can probably dig into the chart to see why this works; I haven't yet.
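The same override can also be passed on the helm command line instead of editing values.yaml (helm v2 syntax; stable/jenkins is an assumption about which chart is meant):

helm install stable/jenkins --set Persistence.StorageClass=standard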
I got this working by making a derivative of the official Jenkins Docker image that uses root instead of UID 1000. It's the only way I could get past the permissions errors with the minikube hostpath provisioner. Dockerfile looks like this:
FROM jenkins:2.46.2-alpine
# Run as root to fix permission errors in Minikube
USER root
# Make root the owner of all files
RUN chown -R root "$JENKINS_HOME" /usr/share/jenkins/ref
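If it helps anyone, an image like this can be built straight against minikube's Docker daemon so no registry push is needed (the tag is just an example):

eval $(minikube docker-env)
docker build -t jenkins-root:latest .
# then use image: jenkins-root:latest with imagePullPolicy: IfNotPresent in the pod spec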
I am having a similar problem with mysql and trying to do a volume hostPath to /data. So, is the answer here to run all our docker containers as root?
Minikube has its own hostpath provisioner now that should create directories in a more writable location :)
could you attach the output of:
kubectl get storageclass
kubectl get pv
kubectl describe $pod
for the pod you configured with the volume?
kubectl get storageclass
NAME TYPE
standard (default) k8s.io/minikube-hostpath
kubectl get pv
No resources found.
kubectl describe pod percona-3960628528-tgxfq --namespace=db-system
Name: percona-3960628528-tgxfq
Namespace: db-system
Node: minikube/192.168.99.100
Start Time: Tue, 09 May 2017 14:26:31 -0600
Labels: pod-template-hash=3960628528
resource=percona
system=db-system
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"db-system","name":"percona-3960628528","uid":"c9e94942-34f5-11e7-a1ac-080027...
Status: Running
IP: 172.17.0.4
Controllers: ReplicaSet/percona-3960628528
Containers:
percona:
Container ID: docker://39c171346d7f5e7906bcaa5d2b1e426003bdd38728a917339d1e2f19c622f820
Image: percona:5.6
Image ID: docker://sha256:1dd4c069d69a7f324be8e635813af62db42348a58e47bde5631522f487419731
Port: 3306/TCP
State: Running
Started: Tue, 09 May 2017 15:18:10 -0600
Last State: Terminated
Reason: Error
Exit Code: 141
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Tue, 09 May 2017 15:13:03 -0600
Ready: True
Restart Count: 15
Environment:
MYSQL_ROOT_PASSWORD: password
MYSQL_OPS_USER: user
MYSQL_OPS_PASSWORD: password
MYSQL_APP_USER: user
MYSQL_APP_PASSWORD: password
Mounts:
/etc/mysql/conf.d from conf (rw)
/var/lib/mysql from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zpxdw (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
data:
Type: HostPath (bare host directory volume)
Path: /data/db/mysql
conf:
Type: HostPath (bare host directory volume)
Path: /data/db/mysql-conf
default-token-zpxdw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zpxdw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
And the spec the pod was created from:
spec: {
containers: [
{
name: 'percona',
image: 'percona:5.6',
imagePullPolicy: 'Always',
env: [
{
name: 'MYSQL_ROOT_PASSWORD',
value: secrets['system-mysql-root-password'],
},
{
name: 'MYSQL_OPS_USER',
value: variables['system-mysql-ops-user'],
},
{
name: 'MYSQL_OPS_PASSWORD',
value: secrets['system-mysql-ops-password'],
},
{
name: 'MYSQL_APP_USER',
value: variables['system-mysql-app-user'],
},
{
name: 'MYSQL_APP_PASSWORD',
value: secrets['system-mysql-app-password'],
},
],
ports: [
{
containerPort: 3306,
protocol: 'TCP',
},
],
volumeMounts: [
{ name: 'data', mountPath: '/var/lib/mysql' },
{ name: 'conf', mountPath: '/etc/mysql/conf.d' },
],
},
],
volumes: [
{ name: 'data', hostPath: { path: '/data/db/mysql' } },
{ name: 'conf', hostPath: { path: '/data/db/mysql-conf' } },
],
},
@lukeab This gave me a good lead. I'm not sure why the storageClassName needed to be manually specified rather than defaulting properly to standard. I'm not sure if this ticket is related: https://github.com/kubernetes/minikube/issues/1239#issuecomment-300853084 ?
@dlorenc Any ideas?
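For what it's worth, the raw hostPath volumes in the spec above bypass the provisioner entirely; to let minikube's hostpath provisioner manage the directory, the data volume would go through a PersistentVolumeClaim, something like this (names and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: percona-data
spec:
  storageClassName: standard   # minikube's default class once the addon is enabled
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

The pod's data volume then becomes persistentVolumeClaim: { claimName: percona-data } instead of hostPath.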
We fixed this by changing the zookeeper demo to stop using the alpha storage annotation. It should work now at master.
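For anyone hitting the same thing in their own manifests, the annotation in question sits on the volumeClaimTemplates metadata; dropping it (or switching to the beta annotation / the later storageClassName field) lets minikube's default "standard" class do the provisioning. Roughly (values are from memory of the tutorial and may differ):

volumeClaimTemplates:
- metadata:
    name: datadir
    annotations:
      volume.alpha.kubernetes.io/storage-class: anything   # the alpha annotation to remove
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi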
@siwyd's https://github.com/kubernetes/minikube/issues/956#issuecomment-299633571 just saved my whole day. This was a very difficult problem to understand. Perhaps minikube should just ship with the default storage class already enabled?