Kubeadm: SELinux experiences for those who want to know

Created on 10 Jan 2017 · 85 comments · Source: kubernetes/kubeadm

Hi all,

So I've been battling with my special setup all weekend, but I have to give up and let other folks look at it.
The setup runs on Windows with Vagrant and Ansible; I can share my stuff if you like.

Facts:
latest version from the repo in the guide
using Fedora 24 images in VirtualBox with Vagrant

So following the guide, I got it working: with setenforce permissive, kubeadm runs with Calico. In my case I have a small machine for running Ansible, setting up a master and a node.

First I tried with setenforce enforcing, and got stuck at the famous "waiting for the control plane to become ready", then used a new terminal to look at it. It seems that etcd is actually clashing with SELinux due to differing types. In the static manifest, in the code in kubernetes/cmd/kubeadm/app/master/manifest.go, I can see that it's run with type spc_t, while everything run from docker, etcd itself included, runs under svirt_sandbox_file_t, so yes, it clashes. I think it has to do with the volumes attached or the process actually trying to write to the volume on the master host.
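
A quick way to see the clashing types side by side (a rough sketch; the paths and the grep filter here are just examples, not copied from the logs):

# SELinux domain of any running etcd process
ps -eZ | grep etcd
# label on the host directory that gets bind-mounted into the etcd pod
ls -dZ /var/lib/etcd
# recent AVC denials
ausearch -m avc -ts recent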

I see similar problems with the pod networking scripts.

So anyone any ideas?

Just trying to get this working with SElinux from the get go ;)

Thanks,

Most helpful comment

Yes @eparis is correct. I have to get a patch to docker to handle the labeling correctly. We need to allow user flexibility (i.e. the ability to break their system if they so choose).

The default case should be that all container processes sharing the same IPC namespace share the same SELinux label, but if a user asks to override the label, it should be allowed.

All 85 comments

@dgoodwin

@coeki I don't think we technically document Fedora installations; I assume you used the CentOS 7 repos? Which docker did you use, the one in Fedora or one from Docker Inc? Could you include details on the denials?

spc_t should be correct per http://danwalsh.livejournal.com/2016/10/03/

@jasonbrooks do you have any thoughts on this, potential fallout from https://github.com/kubernetes/kubernetes/pull/37327

I will try to find some time to see if I can reproduce this week.

Reproduced:

type=AVC msg=audit(1484057408.459:2715): avc:  denied  { entrypoint } for  pid=16812 comm="exe" path="/usr/local/bin/etcd" dev="dm-6" ino=4194436 scontext=system_u:system_r:spc_t:s0:c133,c303 tcontext=system_u:object_r:svirt_sandbox_file_t:s0:c133,c303 tclass=file permissive=0

Does not happen on CentOS 7 as far as I know.

Hi,

Yes, those are the types of errors I'm seeing. For clarification, yes, I used the CentOS repos and I used the docker from Fedora's repo.

I will do a quick check later today to see if this happens with CentOS 7.

Thanks,

Hi,

It actually happens on CentOS 7 too:

type=AVC msg=audit(1484065309.021:634): avc: denied { entrypoint } for pid=12204 comm="exe" path="/usr/local/bin/etcd" dev="dm-9" ino=4194436 scontext=system_u:system_r:spc_t:s0:c390,c679 tcontext=system_u:object_r:svirt_sandbox_file_t:s0:c390,c679 tclass=file
type=AVC msg=audit(1484065310.113:637): avc: denied { entrypoint } for pid=12263 comm="exe" path="/usr/local/bin/etcd" dev="dm-9" ino=4194436 scontext=system_u:system_r:spc_t:s0:c220,c274 tcontext=system_u:object_r:svirt_sandbox_file_t:s0:c220,c274 tclass=file
type=AVC msg=audit(1484065323.851:661): avc: denied { entrypoint } for pid=12550 comm="exe" path="/usr/local/bin/etcd" dev="dm-9" ino=4194436 scontext=system_u:system_r:spc_t:s0:c425,c863 tcontext=system_u:object_r:svirt_sandbox_file_t:s0:c425,c863 tclass=file

Regarding Dan Walsh's post, I also saw this http://danwalsh.livejournal.com/75011.html, so I think stuff changed.

Further things I'm seeing:

[root@localhost vagrant]# ls -Z /var/lib/ |grep container
drwx-----x. root root system_u:object_r:container_var_lib_t:s0 docker
drwxr-xr-x. root root system_u:object_r:container_var_lib_t:s0 etcd
[root@localhost vagrant]# ls -Z /var/lib/ |grep kubelet
drwxr-x---. root root system_u:object_r:var_lib_t:s0 kubelet

[root@localhost vagrant]# ls -Z /var/lib/kubelet/pods/3a26566bb004c61cd05382212e3f978f/containers/etcd/
-rw-r--r--. root root system_u:object_r:svirt_sandbox_file_t:s0:c425,c863 00cb813c
-rw-r--r--. root root system_u:object_r:svirt_sandbox_file_t:s0:c220,c274 066b8a86
-rw-r--r--. root root system_u:object_r:svirt_sandbox_file_t:s0:c390,c679 0c8e84af
-rw-r--r--. root root system_u:object_r:svirt_sandbox_file_t:s0:c12,c477 342bd480
-rw-r--r--. root root system_u:object_r:svirt_sandbox_file_t:s0:c215,c768 995f6946
-rw-r--r--. root root system_u:object_r:svirt_sandbox_file_t:s0:c23,c405 e184aa90
-rw-r--r--. root root system_u:object_r:svirt_sandbox_file_t:s0:c65,c469 eb05320c

The same on CentOS and Fedora. Not sure what the way to go is, but I think we don't need to specify SELinux types in any manifests anymore. At least for kubeadm; the pod networking is another story.
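
(For reference, the type kubeadm asks for ends up in the static pod manifest it writes; a rough way to check, assuming the default manifest path and that the field is spelled seLinuxOptions in that version:)

grep -i -A2 selinux /etc/kubernetes/manifests/etcd.json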

Thoughts?

Thanks,

I'm poking at this now.

Thanks @jasonbrooks.

FWIW CentOS7 was clean for me with:

(root@centos1 ~) $ kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.json", assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf]
(root@centos1 ~) $ getenforce
Enforcing
(root@centos1 ~) $ rpm -qa | grep kube
kubelet-1.5.1-0.x86_64
kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64
kubectl-1.5.1-0.x86_64
kubernetes-cni-0.3.0.1-0.07a8a2.x86_64
(root@centos1 ~) $ ls -lZ /var/lib | grep docker
drwx-----x. root    root    system_u:object_r:docker_var_lib_t:s0 docker/
drwxr-xr-x. root    root    system_u:object_r:docker_var_lib_t:s0 etcd/
drwxr-xr-x. root    root    system_u:object_r:docker_var_lib_t:s0 kubeadm-etcd/
(root@centos1 ~) $ rpm -qa | grep selinux
selinux-policy-targeted-3.13.1-60.el7_2.3.noarch
libselinux-2.2.2-6.el7.x86_64
docker-selinux-1.10.3-46.el7.centos.10.x86_64
libselinux-utils-2.2.2-6.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64
selinux-policy-3.13.1-60.el7_2.3.noarch

kubeadm init was working and this was the env I was doing my testing with to verify things were ok.

I then thought to check whether my vagrant machines were running the latest policy, and that picked up a bunch of selinux updates, after which kubeadm init now fails with the denial provided by @coeki. So something has gone wrong with the latest policy. After my update:

(root@centos1 ~) $ rpm -qa | grep selinux
libselinux-2.5-6.el7.x86_64
docker-selinux-1.10.3-46.el7.centos.14.x86_64
selinux-policy-targeted-3.13.1-102.el7_3.7.noarch
libselinux-python-2.5-6.el7.x86_64
selinux-policy-3.13.1-102.el7_3.7.noarch
libselinux-utils-2.5-6.el7.x86_64
container-selinux-1.10.3-59.el7.centos.x86_64

Look at this. CentOS 7 w/ docker-selinux:

$ sesearch -T -s docker_t | grep spc_t
   type_transition docker_t docker_share_t : process spc_t; 
   type_transition docker_t unlabeled_t : process spc_t; 
   type_transition docker_t docker_var_lib_t : process spc_t;

And after installing container-selinux:

$ sesearch -T -s docker_t | grep spc_t
   type_transition container_runtime_t container_var_lib_t : process spc_t; 
   type_transition container_runtime_t container_share_t : process spc_t;

Yep, I added exclude=container-selinux to my fedora-updates repo and the kubeadm init completes as expected.
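
For anyone wanting to reproduce that workaround, the one-off equivalent is sketched below (alternatively, a persistent exclude=container-selinux line can go in the relevant .repo file; the repo file name and section depend on your distro):

# skip container-selinux when pulling updates, so the older docker-selinux policy stays in place
dnf update --exclude=container-selinux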

ping @rhatdan

So we should document exclude=container-selinux as a solution to having SELinux on (setenforce 1) and running kubeadm?

I'm thinking we'll need an SELinux fix to address this.

After reading the mentioned post from Dan Walsh (http://danwalsh.livejournal.com/75011.html) a little more closely: either there's an error in typealiasing the docker_t types to container_t, the new generic name for container types, or it hasn't been pulled in yet.

Another train of thought. I'm not sure how docker containers are launched by kubernetes, but a fix could also be to launch the containers with the new container types.

Ignore the last remark, it's the OS container runtime of course, so yes, @jasonbrooks' remark is right: we need an SELinux fix.

@lsm5 can we get an updated version of container-selinux for Fedora 24? It seems to be a little out of date there.

@rhatdan @lsm5 it's bad in latest CentOS 7 as well

What versions of docker, docker-selinux and container-selinux does CentOS 7 have?

From above:

(root@centos1 ~) $ rpm -qa | grep selinux
libselinux-2.5-6.el7.x86_64
docker-selinux-1.10.3-46.el7.centos.14.x86_64
selinux-policy-targeted-3.13.1-102.el7_3.7.noarch
libselinux-python-2.5-6.el7.x86_64
selinux-policy-3.13.1-102.el7_3.7.noarch
libselinux-utils-2.5-6.el7.x86_64
container-selinux-1.10.3-59.el7.centos.x86_64

In my comment above you can also see the previous versions I had where everything was working; it then started failing after a yum update to these versions.

@dgoodwin can you try the latest container-selinux for CentOS 7? You can get it using this repo:

[virt7-docker-common-candidate]
name=virt7-docker-common-candidate
baseurl=https://cbs.centos.org/repos/virt7-docker-common-candidate/x86_64/os/
enabled=1
gpgcheck=0

See: https://wiki.centos.org/Cloud/Docker

Still failing @lsm5

(root@centos1 ~) $ rpm -qa | grep selinux                                                                                                     
libselinux-2.5-6.el7.x86_64
selinux-policy-targeted-3.13.1-102.el7_3.7.noarch
libselinux-python-2.5-6.el7.x86_64
selinux-policy-3.13.1-102.el7_3.7.noarch
container-selinux-2.2-3.el7.noarch
libselinux-utils-2.5-6.el7.x86_64

type=AVC msg=audit(1484146410.625:156): avc:  denied  { entrypoint } for  pid=2831 comm="exe" path="/usr/local/bin/etcd" dev="dm-8" ino=8388868 scontext=system_u:system_r:spc_t:s0:c590,c748 tcontext=system_u:object_r:svirt_sandbox_file_t:s0:c590,c748 tclass=file
type=AVC msg=audit(1484146437.147:168): avc:  denied  { entrypoint } for  pid=3102 comm="exe" path="/usr/local/bin/etcd" dev="dm-9" ino=8388868 scontext=system_u:system_r:spc_t:s0:c73,c888 tcontext=system_u:object_r:svirt_sandbox_file_t:s0:c73,c888 tclass=file
type=AVC msg=audit(1484146454.690:174): avc:  denied  { entrypoint } for  pid=3269 comm="exe" path="/usr/local/bin/etcd" dev="dm-9" ino=8388868 scontext=system_u:system_r:spc_t:s0:c184,c206 tcontext=system_u:object_r:svirt_sandbox_file_t:s0:c184,c206 tclass=file
type=AVC msg=audit(1484146479.755:179): avc:  denied  { entrypoint } for  pid=3375 comm="exe" path="/usr/local/bin/etcd" dev="dm-9" ino=8388868 scontext=system_u:system_r:spc_t:s0:c245,c784 tcontext=system_u:object_r:svirt_sandbox_file_t:s0:c245,c784 tclass=file
type=AVC msg=audit(1484146529.400:190): avc:  denied  { entrypoint } for  pid=3637 comm="exe" path="/usr/local/bin/etcd" dev="dm-9" ino=8388868 scontext=system_u:system_r:spc_t:s0:c893,c1013 tcontext=system_u:object_r:svirt_sandbox_file_t:s0:c893,c1013 tclass=file

Could you make sure container-selinux successfully installed?

dnf reinstall container-selinux

I tried with that latest version of container-selinux yesterday on CentOS, and I also tried the container-selinux that's in updates-testing for F25; same issue.

I think this block is silently failing.

optional_policy(`
virt_stub_svirt_sandbox_file()
virt_transition_svirt_sandbox(spc_t, system_r)
virt_sandbox_entrypoint(spc_t)
virt_sandbox_domtrans(container_runtime_t, spc_t)
')
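
One way to check whether those interfaces actually produced the needed rule on an affected host (a sketch using sesearch from setools; the types come from the denial above):

# is an spc_t process allowed to use an svirt_sandbox_file_t file as an entrypoint?
sesearch --allow -s spc_t -t svirt_sandbox_file_t -c file -p entrypoint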

Still failing after reinstall.

I am setting up a CentOS machine to check what is going wrong.

Thanks for the help everyone, it would be cool if we could have setenforce 1 working with kubeadm in the next release ;)

Hi,

I tried with Fedora 25 Cloud Base; the same result in that it's not installing, but with different messages:

time->Sat Jan 21 13:55:37 2017

type=AVC msg=audit(1485006937.554:8062): avc: denied { create } for pid=676 comm="etcd" name="data" scontext=system_u:system_r:container_t:s0:c358,c612 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=dir permissive=0

time->Sat Jan 21 13:57:08 2017

type=AVC msg=audit(1485007028.572:8075): avc: denied { create } for pid=1181 comm="etcd" name="data" scontext=system_u:system_r:container_t:s0:c358,c612 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=dir permissive=0

time->Sat Jan 21 13:59:53 2017
type=AVC msg=audit(1485007193.515:8088): avc: denied { create } for pid=1780 comm="etcd" name="data" scontext=system_u:system_r:container_t:s0:c358,c612 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=dir permissive=0

[root@master-01 vagrant]# rpm -qa |grep selinux
libselinux-python3-2.5-13.fc25.x86_64
selinux-policy-3.13.1-225.6.fc25.noarch
libselinux-2.5-13.fc25.x86_64
libselinux-utils-2.5-13.fc25.x86_64
rpm-plugin-selinux-4.13.0-6.fc25.x86_64
selinux-policy-targeted-3.13.1-225.6.fc25.noarch
libselinux-python-2.5-13.fc25.x86_64
container-selinux-2.2-2.fc25.noarch

Oops, sorry for the markup in the last post, have no clue how that happened ;)

@coeki it's because of the # at the beginning of those lines. I suggest indenting the lines with four spaces, which lets GH know that it's code.

@jberkus thanks, I'll keep that in mind.

I'm going to try fedora atomic, which is a whole new ball game to me. But since @rhatdan said it works under that, I'm curious. I'll report my findings, if I get that working ;)

That is showing that you volume mounted a directory from the host into the docker container without relabeling. I think this is a known issue in k8s and supposedly fixed in newer versions.
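
For illustration, this is the relabeling being referred to: the :Z (or :z) volume suffix asks docker to relabel the host path so the container's process type can write to it (the path and image here are just an example):

# without :Z the bind mount keeps its host label and writes get denied under enforcing
docker run --rm -v /var/lib/etcd:/var/lib/etcd:Z fedora touch /var/lib/etcd/.touch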

I just tested this with f25 and the kubeadm guide from here. It's stuck at [apiclient] Created API client, waiting for the control plane to become ready

[root@fedora-1 ~]# rpm -q docker container-selinux kubelet kubeadm
docker-1.12.6-5.git037a2f5.fc25.x86_64
container-selinux-2.2-2.fc25.noarch
kubelet-1.5.1-0.x86_64
kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64
[root@fedora-1 ~]# ausearch -m avc -ts recent
----
time->Wed Jan 25 18:47:12 2017
type=AVC msg=audit(1485388032.826:415): avc:  denied  { create } for  pid=9080 comm="etcd" name="data" scontext=system_u:system_r:container_t:s0:c159,c642 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=dir permissive=0
----
time->Wed Jan 25 18:52:43 2017
type=AVC msg=audit(1485388363.049:459): avc:  denied  { create } for  pid=9940 comm="etcd" name="data" scontext=system_u:system_r:container_t:s0:c159,c642 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=dir permissive=0

I have a modified version tweaked to work with atomic in this copr, that includes v.1.5.2, but that's also affected.

If I go back to the last docker version for fedora before the docker-selinux to container-selinux transition, kubeadm works just fine w/ selinux enforcing:

# rpm -q docker docker-selinux
docker-1.12.1-13.git9a3752d.fc25.x86_64
docker-selinux-1.12.1-13.git9a3752d.fc25.x86_64

Try installing container-selinux from updates testing, although I still think the issue here is with kubernetes handling the labeling.

Here's with container-selinux-2.4-1.fc25.noarch:

time->Thu Jan 26 02:49:26 2017
type=AVC msg=audit(1485416966.739:468): avc:  denied  { create } for  pid=9778 comm="etcd" name="data" scontext=system_u:system_r:container_t:s0:c213,c267 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=dir permissive=0
----
time->Thu Jan 26 02:50:13 2017
type=AVC msg=audit(1485417013.023:512): avc:  denied  { create } for  pid=11274 comm="etcd" name="data" scontext=system_u:system_r:container_t:s0:c164,c200 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=dir permissive=1
----
time->Thu Jan 26 02:50:13 2017
type=AVC msg=audit(1485417013.023:513): avc:  denied  { create } for  pid=11274 comm="etcd" name=".touch" scontext=system_u:system_r:container_t:s0:c164,c200 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=file permissive=1
----
time->Thu Jan 26 02:50:13 2017
type=AVC msg=audit(1485417013.023:514): avc:  denied  { write open } for  pid=11274 comm="etcd" path="/var/etcd/data/.touch" dev="dm-0" ino=33776166 scontext=system_u:system_r:container_t:s0:c164,c200 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=file permissive=1
----
time->Thu Jan 26 02:50:13 2017
type=AVC msg=audit(1485417013.023:515): avc:  denied  { unlink } for  pid=11274 comm="etcd" name=".touch" dev="dm-0" ino=33776166 scontext=system_u:system_r:container_t:s0:c164,c200 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=file permissive=1
----
time->Thu Jan 26 02:50:13 2017
type=AVC msg=audit(1485417013.023:516): avc:  denied  { read } for  pid=11274 comm="etcd" path="/var/etcd/data/member/snap/db" dev="dm-0" ino=498029 scontext=system_u:system_r:container_t:s0:c164,c200 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=file permissive=1
----
time->Thu Jan 26 02:50:13 2017
type=AVC msg=audit(1485417013.023:517): avc:  denied  { lock } for  pid=11274 comm="etcd" path="/var/etcd/data/member/snap/db" dev="dm-0" ino=498029 scontext=system_u:system_r:container_t:s0:c164,c200 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=file permissive=1
----
time->Thu Jan 26 02:50:13 2017
type=AVC msg=audit(1485417013.041:518): avc:  denied  { rename } for  pid=11274 comm="etcd" name="wal.tmp" dev="dm-0" ino=17291587 scontext=system_u:system_r:container_t:s0:c164,c200 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=dir permissive=1

What I don't understand is how the same version of kubernetes that doesn't work w/ the current or testing container-selinux works just fine w/ the older docker and docker-selinux. Is container_t allowing everything that spc_t used to allow?

@jasonbrooks if you docker ps -a, find the failing etcd container, and docker inspect it, what do you see for security options?
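
Something like this should pull out just that field (a sketch; the grep filter for finding the right container is an assumption):

docker ps -a | grep etcd
docker inspect --format '{{json .HostConfig.SecurityOpt}}' <etcd-container-id>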

If it's something like this:

  "SecurityOpt": [
                "seccomp=unconfined",
                "label:type:spc_t",
                "label=user:system_u",
                "label=role:system_r",
                "label=type:svirt_lxc_net_t",
                "label=level:s0:c803,c806"
            ],

You may be hitting a separate issue surfacing with Docker 1.12+, which pmorie is working on here: https://github.com/kubernetes/kubernetes/pull/40179

@dgoodwin I just did a fresh install on fedora 25, and can confirm everything @jasonbrooks is seeing.

Output of docker inspect on the etcd container:

"SecurityOpt": [
            "seccomp=unconfined",
            "label:type:spc_t",
            "label=user:system_u",
            "label=role:system_r",
            "label=type:container_t",
            "label=level:s0:c122,c403"
        ],

rpm -q container-selinux:

container-selinux-2.2-2.fc25.noarch

A bit more info. When using docker-1.12.1-13.git9a3752d.fc25.x86_64 and docker-selinux (which is docker api v1.24), here are the etcd container security options:

"SecurityOpt": [
                "seccomp=unconfined",
                "label:type:spc_t"
            ],

With docker-1.12.6-5.git037a2f5.fc25.x86_64 (also docker api v1.24) and container-selinux, the security options are:

"SecurityOpt": [
                "seccomp=unconfined",
                "label:type:spc_t",
                "label=user:system_u",
                "label=role:system_r",
                "label=type:container_t",
                "label=level:s0:c306,c898"
            ],

Transitioning back to this issue because I believe we still have container-selinux problems with Docker 1.12.

If you apply the fix from https://github.com/kubernetes/kubernetes/issues/37807 it will correct the above label separator issue so all labels show up with =. However, the problem remains: container_t continues to be added after spc_t and the pod will be denied.

The simplest reproducer I can find is just a local hack cluster and creating the below manifest:

Fedora 25, kubernetes devel with the following packages:

docker-1.12.6-5.git037a2f5.fc25.x86_64
container-selinux-2.4-1.fc25.noarch
$ sudo ALLOW_SECURITY_CONTEXT=1 PATH=$PATH:/home/dgoodwin/go/src/k8s.io/kubernetes/third_party/etcd:/home/dgoodwin/go/bin hack/local-up-cluster.sh

in another terminal:

$ kubectl --kubeconfig /var/run/kubernetes/admin.kubeconfig create -f https://gist.githubusercontent.com/dgoodwin/1c19d2ad184ff792f786fec3cd137d0b/raw/beaaaa466b1073cacf4ec92f8ade9da28ad3233e/etcd.json



The resulting AVC denial:

type=AVC msg=audit(1485449563.680:10995): avc:  denied  { create } for  pid=17157 comm="etcd" name=".touch" scontext=system_u:system_r:container_t:s0:c632,c788 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=file permissive=0

And docker inspect of the etcd container shows:

            "SecurityOpt": [
                "seccomp=unconfined",
                "label=type:spc_t",
                "label=user:system_u",
                "label=role:system_r",
                "label=type:container_t",
                "label=level:s0:c632,c788"
            ],

container_t is superseding the spc_t we requested in the manifest, as you can see here: https://gist.githubusercontent.com/dgoodwin/1c19d2ad184ff792f786fec3cd137d0b/raw/beaaaa466b1073cacf4ec92f8ade9da28ad3233e/etcd.json
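
For readers following along, a trimmed sketch of a manifest that requests spc_t at the container level the same way, submitted like the gist above (the pod name, image tag and command here are placeholders, not copied from the gist):

# container-level request: securityContext.seLinuxOptions on the container, not the pod
kubectl --kubeconfig /var/run/kubernetes/admin.kubeconfig create -f - <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "selinux-test" },
  "spec": {
    "containers": [{
      "name": "etcd",
      "image": "gcr.io/google_containers/etcd-amd64:3.0.14-kubeadm",
      "command": ["etcd", "--data-dir=/var/lib/etcd"],
      "securityContext": { "seLinuxOptions": { "type": "spc_t" } }
    }]
  }
}
EOF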

CC @pweil @eparis @pmorie @rhatdan

container-selinux has nothing to do with this. Looks like it is definitely a docker issue. You should be able to install the older version of docker with container-selinux and I would bet the old behavior returns.

It seems to work properly from the client.

docker run -d --security-opt label=type:spc_t fedora sleep 50

            "SecurityOpt": [
                "label=type:spc_t"
            ],

But this shows a similar bug, although without all of the other content:

docker run -ti --security-opt label=type:spc_t --security-opt label=type:container_t fedora sleep 50

            "SecurityOpt": [
                "label=type:spc_t",
                "label=type:container_t"
            ],

@rhatdan @mrunalp with @dgoodwin's reproducer I was able to find out what's causing this for projectatomic/docker 1.12.6 - it's this patch: https://github.com/projectatomic/docker/commit/07f6dff6273f98a2da8731b87f8dd98d86a5d6ff

Reverting that apparently fixes this issue. However that patch seems to be needed and docker upstream has that patch in their latest docker 1.13.0 release (so upstream has this very same issue in 1.13 but not in upstream 1.12.6).

@rhatdan what's the expected stuff that goes into SecurityOpt if you specify more than one label with security-opt flag at docker run?

docker run -ti --security-opt label=type:spc_t --security-opt label=type:container_t fedora sleep 50

do you want the security-opt to be just container_t or spc_t? I mean, it must be one or the other right? both cannot coexist right?

I am not sure if this is a docker problem or a kubernetes problem. From reading the patch above, it looks like the pause container is running under confinement, and we are joining the container process to a running container?

The issue seems simple though, here's the config used to create the pause and the etcd container (with the etcd container joining the ipc namespace of the pause container):

Jan 28 12:55:23 runcom.usersys.redhat.com dockerd-current[4199]: time="2017-01-28T12:55:23.362797988+01:00" level=debug msg="form data: {\"AttachStderr\":false,\"AttachStdin\":false,\"AttachStdout\":false,\"Cmd\":null,\"Domainname\":\"\",\"Entrypoint\":null,\"Env\":[\"KUBERNETES_PORT=tcp://10.0.0.1:443\",\"KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443\",\"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\"KUBERNETES_PORT_443_TCP_PORT=443\",\"KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1\",\"KUBERNETES_SERVICE_HOST=10.0.0.1\",\"KUBERNETES_SERVICE_PORT=443\",\"KUBERNETES_SERVICE_PORT_HTTPS=443\"],\"HostConfig\":{\"AutoRemove\":false,\"Binds\":null,\"BlkioDeviceReadBps\":null,\"BlkioDeviceReadIOps\":null,\"BlkioDeviceWriteBps\":null,\"BlkioDeviceWriteIOps\":null,\"BlkioWeight\":0,\"BlkioWeightDevice\":null,\"CapAdd\":null,\"CapDrop\":null,\"Cgroup\":\"\",\"CgroupParent\":\"\",\"ConsoleSize\":[0,0],\"ContainerIDFile\":\"\",\"CpuCount\":0,\"CpuPercent\":0,\"CpuPeriod\":0,\"CpuQuota\":0,\"CpuShares\":2,\"CpusetCpus\":\"\",\"CpusetMems\":\"\",\"Devices\":[],\"DiskQuota\":0,\"Dns\":[\"8.8.8.8\"],\"DnsOptions\":null,\"DnsSearch\":[\"vutbr.cz\",\"usersys.redhat.com\"],\"ExtraHosts\":null,\"GroupAdd\":null,\"IOMaximumBandwidth\":0,\"IOMaximumIOps\":0,\"IpcMode\":\"\",\"Isolation\":\"\",\"KernelMemory\":0,\"Links\":null,\"LogConfig\":{\"Config\":null,\"Type\":\"\"},\"Memory\":0,\"MemoryReservation\":0,\"MemorySwap\":-1,\"MemorySwappiness\":null,\"NetworkMaximumBandwidth\":0,\"NetworkMode\":\"\",\"OomKillDisable\":null,\"OomScoreAdj\":-998,\"PidMode\":\"\",\"PidsLimit\":0,\"PortBindings\":{},\"Privileged\":false,\"PublishAllPorts\":false,\"ReadonlyRootfs\":false,\"RestartPolicy\":{\"MaximumRetryCount\":0,\"Name\":\"\"},\"SecurityOpt\":[\"seccomp=unconfined\"],\"ShmSize\":6.7108864e+07,\"StorageOpt\":null,\"UTSMode\":\"\",\"Ulimits\":null,\"UsernsMode\":\"\",\"VolumeDriver\":\"\",\"VolumesFrom\":null},\"Hostname\":\"etcd\",\"Image\":\"gcr.io/google_containers/pause-amd64@sha256:163ac025575b775d1c0f9bf0bdd0f086883171eb475b5068e7defa4ca9e76516\",\"Labels\":{\"io.kubernetes.container.hash\":\"f932fb66\",\"io.kubernet
Jan 28 12:55:23 runcom.usersys.redhat.com dockerd-current[4199]: es.container.name\":\"POD\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"\",\"io.kubernetes.container.terminationMessagePolicy\":\"\",\"io.kubernetes.pod.name\":\"etcd\",\"io.kubernetes.pod.namespace\":\"default\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\",\"io.kubernetes.pod.uid\":\"a63cff8a-e550-11e6-8c99-507b9d4141fa\"},\"NetworkingConfig\":null,\"OnBuild\":null,\"OpenStdin\":false,\"StdinOnce\":false,\"Tty\":false,\"User\":\"\",\"Volumes\":null,\"WorkingDir\":\"\"}"


Jan 28 12:55:23 runcom.usersys.redhat.com dockerd-current[4199]: time="2017-01-28T12:55:23.948539974+01:00" level=debug msg="form data: {\"AttachStderr\":false,\"AttachStdin\":false,\"AttachStdout\":false,\"Cmd\":null,\"Domainname\":\"\",\"Entrypoint\":[\"etcd\",\"--listen-client-urls=http://127.0.0.1:3379\",\"--advertise-client-urls=http://127.0.0.1:3379\",\"--data-dir=/var/lib/etcd\"],\"Env\":[\"KUBERNETES_SERVICE_PORT=443\",\"KUBERNETES_SERVICE_PORT_HTTPS=443\",\"KUBERNETES_PORT=tcp://10.0.0.1:443\",\"KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443\",\"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\"KUBERNETES_PORT_443_TCP_PORT=443\",\"KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1\",\"KUBERNETES_SERVICE_HOST=10.0.0.1\"],\"HostConfig\":{\"AutoRemove\":false,\"Binds\":[\"/etc/ssl/certs:/etc/ssl/certs\",\"/var/lib/etcd:/var/lib/etcd\",\"/etc/kubernetes:/etc/kubernetes/:ro\",\"/var/lib/kubelet/pods/a63cff8a-e550-11e6-8c99-507b9d4141fa/volumes/kubernetes.io~secret/default-token-gvxcc:/var/run/secrets/kubernetes.io/serviceaccount:ro,Z\",\"/var/lib/kubelet/pods/a63cff8a-e550-11e6-8c99-507b9d4141fa/etc-hosts:/etc/hosts:Z\",\"/var/lib/kubelet/pods/a63cff8a-e550-11e6-8c99-507b9d4141fa/containers/etcd/692fbb22:/dev/termination-log:Z\"],\"BlkioDeviceReadBps\":null,\"BlkioDeviceReadIOps\":null,\"BlkioDeviceWriteBps\":null,\"BlkioDeviceWriteIOps\":null,\"BlkioWeight\":0,\"BlkioWeightDevice\":null,\"CapAdd\":null,\"CapDrop\":null,\"Cgroup\":\"\",\"CgroupParent\":\"\",\"ConsoleSize\":[0,0],\"ContainerIDFile\":\"\",\"CpuCount\":0,\"CpuPercent\":0,\"CpuPeriod\":0,\"CpuQuota\":0,\"CpuShares\":204,\"CpusetCpus\":\"\",\"CpusetMems\":\"\",\"Devices\":[],\"DiskQuota\":0,\"Dns\":null,\"DnsOptions\":null,\"DnsSearch\":null,\"ExtraHosts\":null,\"GroupAdd\":null,\"IOMaximumBandwidth\":0,\"IOMaximumIOps\":0,\"IpcMode\":\"container:2b4df020a63fa9796c860a065135e06affd372d38dc9e323ca66c0fe010de469\",\"Isolation\":\"\",\"KernelMemory\":0,\"Links\":null,\"LogConfig\":{\"Config\":null,\"Type\":\"\"},\"Memory\":0,\"MemoryReservation\":0,\"MemorySwap\":-1,\"MemorySwappiness\":null,\"NetworkMaximumBandwidth\":0,\"NetworkMode\":\"con
Jan 28 12:55:23 runcom.usersys.redhat.com dockerd-current[4199]: tainer:2b4df020a63fa9796c860a065135e06affd372d38dc9e323ca66c0fe010de469\",\"OomKillDisable\":null,\"OomScoreAdj\":999,\"PidMode\":\"\",\"PidsLimit\":0,\"PortBindings\":null,\"Privileged\":false,\"PublishAllPorts\":false,\"ReadonlyRootfs\":false,\"RestartPolicy\":{\"MaximumRetryCount\":0,\"Name\":\"\"},\"SecurityOpt\":[\"seccomp=unconfined\",\"label:type:spc_t\"],\"ShmSize\":6.7108864e+07,\"StorageOpt\":null,\"UTSMode\":\"\",\"Ulimits\":null,\"UsernsMode\":\"\",\"VolumeDriver\":\"\",\"VolumesFrom\":null},\"Hostname\":\"\",\"Image\":\"gcr.io/google_containers/etcd-amd64@sha256:b7b54201ba7ae22e1b7993d86d90615646a736a23abd8561f6012bb0e3dcc075\",\"Labels\":{\"io.kubernetes.container.hash\":\"f46ae33b\",\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.name\":\"etcd\",\"io.kubernetes.pod.namespace\":\"default\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\",\"io.kubernetes.pod.uid\":\"a63cff8a-e550-11e6-8c99-507b9d4141fa\"},\"NetworkingConfig\":null,\"OnBuild\":null,\"OpenStdin\":false,\"StdinOnce\":false,\"Tty\":false,\"User\":\"\",\"Volumes\":null,\"WorkingDir\":\"\"}"

You can see (well, find) from the above that the pause container is created without the spc_t label, while the etcd container is created with the spc_t label.
In docker, when a container joins the IPC namespace of another, it gets the SELinux labels of the target container. In this case, docker gets the labels from the pause container, which is container_t. It then appends that to the security labels of etcd, which is spc_t. Eventually you have both and things blow up.

So, @rhatdan, my question is: what do we have to do? Should kubernetes run the pause container with spc_t as well so that it won't default to container_t, or should the etcd container override the label of the pause container if etcd has one (in this case spc_t)?

Hope everything is clear.

If the app is being specified to run as spc_t, then the pause and all of the containers should run as spc_t.

The docker issue is saying that label:type:spc_t is being passed instead of label=type:spc_t? So it is one of these issues causing the problem.

Figured this out with @rhatdan offline. The issue is that the pause container should be running with spc_t as well basically.

@dgoodwin do you know where the security options of a pod are passed down to create the pause container? Those security options from the pod have to be applied to the pause container as well, or everything blows up because otherwise those labels can't talk to each other.

Figured out that the pod security opts are nil here https://gist.githubusercontent.com/dgoodwin/1c19d2ad184ff792f786fec3cd137d0b/raw/beaaaa466b1073cacf4ec92f8ade9da28ad3233e/etcd.json even though one of the containers inside containers has an SELinux label set.

How would you guys extract the SELinux label from a container in containers in the PodSpec? The containers in the PodSpec need some way to agree on just _a_ SELinux label, and that label must be used for the infra container as well.

There might be a bug in runc though, when setting the initial process labels, I guess. Found it.

@runcom Bug would either be in docker or k8s. runc is simply just applying whatever is passed down. Are we ensuring that all the containers in the pod are getting the same label?

I meant, there's another bug with selinux labels _in docker_. The label isn't applied to the process correctly.

Alright, the thing is that containers in a pod have to have the same SELinux label, and it doesn't make sense to have different labels for each container in a pod. There's already a pod-level SELinux label and each container must use that, even the infra container.

Does anybody agree with that? In which case, why can you define a selinux label for containers in a pod? That should be an error to me.

@liggitt @pmorie not sure who else... right now SecurityContext is a container API object. But docker, @runcom and @rhatdan are saying that it must be a pod level object...

Do we need to move it from container to pod? Which would be an API issue obviously, but if docker just can't handle it at the container level...

I guess to be accurate I should say it can be set in both pod and container. But if the container is always ignored and just shared with the pause container (which I believe is only set at the pod level), should we get rid of it at the container level?

@eparis, How about having one at the pod level and allowing override in container in the future if we have policy for that?

I believe that's because you may have multiple containers, and if any container specifies a different label, which one would you pick? I guess the main point is that the pod as a unit should be the only thing defining these kinds of options. Containers in a pod are bound to the pod's constraints no matter what, right?

I believe today the pod level security context (if defined) is applied to all containers. If a container level security context is defined kube attempts to overwrite the pod level context. What I'm hearing from you guys is that the container level security context is always overridden by the pause container context, and the pause container is always == the pod level security context. If there was no pod level security context the pause container gets the docker default and thus the (other) container level security contexts are ignored...

thus today the container security context is useless, confusing, and never worked...

Should docker start rejecting container definitions that try to set selinux labels which will be ignored (because of some shared namespace)? Typically SELinux is big on rejecting invalid situations instead of leading the user to believe they have security properties which they don't.

Yeah, we can tighten up docker to reject invalid SELinux config if there are shared namespaces.

@smarterclayton you probably should read through this as well. Basically container level selinux definitions have worked in the past but were broken by 1.12.5 and 1.13

(Mistyped, and I edited the last comment.) CONTAINER level selinux definitions have never worked.

I just tried a bunch of combinations, starting with docker-1.12.1-13.git9a3752d.fc25.x86_64 and container-selinux-2.2-2.fc25.noarch; indeed, container-selinux didn't make a difference.

Things worked up until docker-1.12.4.

I went to see what changed between .3 and .4 and it looks like you guys are on the case: https://github.com/docker/docker/pull/26792/commits/4c10c2ded38031b20f5a0a409dd24643625fa878 :)

A fix was made to docker-1.13 that was backported to docker-1.12, and it reveals a bug in k8s. Basically the pod and all containers in it need to run as the same SELinux context. Currently k8s is not setting spc_t on the pause container. When the container process gets added to the pause container, it is sharing the same IPC namespace. Docker now assigns the SELinux label of the pause container to the newly created container, which is the expected behaviour. In a POD we need to have all of the containers running with the same SELinux label. If we fix k8s to assign the label when creating the POD everything will work.

@eparis I guess theoretically we could allow multiple labels to be assigned, and could check in docker whether a label is set before setting the container label to match the container whose IPC namespace it is joining.

I could see where you might want to run the main container locked down, but maybe the side car container with a looser policy.

Moving the securityContext up from container to pod spec allows the container to run properly. I will submit a patch for kubeadm to move this up a level for the time being while this gets sorted out. Thank you all for the help figuring out what is going wrong and where.
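
Roughly, that amounts to moving the seLinuxOptions from the container's securityContext up to the pod's securityContext, so the pause/infra container gets the same label; a trimmed sketch with the same caveats as the earlier container-level example (pod name, image tag and command are illustrative):

# pod-level request: spec.securityContext applies to the pause container too
kubectl create -f - <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "selinux-test" },
  "spec": {
    "securityContext": { "seLinuxOptions": { "type": "spc_t" } },
    "containers": [{
      "name": "etcd",
      "image": "gcr.io/google_containers/etcd-amd64:3.0.14-kubeadm",
      "command": ["etcd", "--data-dir=/var/lib/etcd"]
    }]
  }
}
EOF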

Yes, many thanks for this investigation! I'm a noob at selinux (haven't ever used it, I'm more of a debian guy), so this effort is worth its weight in gold.

I guess we're hoping for a resolution to this in k8s v1.6, right?

Also, do network providers like weave, flannel, etc. add these SELinux rules as well, in order for their hostPath mounts to work properly? If that's the case, we have to inform them in time for the next release.

Thanks all for having a look at this. So to recap, tell me if I read it wrong:

1. Issue with the format of the label ":" vs "=", worked on here: https://github.com/kubernetes/kubernetes/pull/40179
2. Issue with labels getting attached twice, also handled in the above PR
3. Parsing of labels as reported by @runcom, handled here: https://github.com/opencontainers/runc/pull/1303
4. Wider discussion on whether security opts should work at the container level inside a pod or just pod-wide

@luxas I was first trying to fix kubeadm and then have a look at what exactly goes wrong with the network providers like weave, flannel, canal, calico and romana; probably the same issues, but we'll have to pinpoint them first I think, so we can tell them what to fix.

3. Parsing of labels as reported by @runcom, handled here: opencontainers/runc#1303

this is just for docker > 1.12.6 - docker in Fedora/RHEL has 1.12.6, which correctly applies the label

I believe today the pod level security context (if defined) is applied to all containers. If a container level security context is defined kube attempts to overwrite the pod level context

This is correct. It is done in the security context provider by the DetermineEffectiveSecurityContext method.

Basically container level selinux definitions have worked in the past but were broken by 1.12.5 and 1.13

Yeah, we can tighten up docker to reject invalid SELinux config if there are shared namespaces.

@pmorie @mrunalp @eparis - help me understand the correct direction for the SC provider in this scenario. It sounds like since we always share the IPC namespace with the pause container that:

  1. if there is any selinux label on a container it needs to be applied to the pause container in order to share the IPC namespace
  2. per-container settings don't really make sense if there is more than one container in the pod with SELinux settings and those settings do not match?

if item 2 is true then I think that is a strong argument to only have pod level settings.

I'd argue that docker needs to start the pause container with the pod level security context (if defined). It needs to start the actual containers with the container security context (if defined) fallback to the pod level context (if defined) and then fall back to 'shared with pause'. (if undefined)

I do not understand why the shared IPC namespace should be relevant to this discussion. If the user defines a combination of contexts that selinux won't allow to work, then things shouldn't work. But docker should never 'overwrite' anything the user asked for.

I'd argue that docker needs to start the pause container with the pod level security context (if defined). It needs to start the actual containers with the container security context (if defined) fallback to the pod level context (if defined) and then fall back to 'shared with pause'. (if undefined)

Then I think there isn't anything to do on the provider side. It should already be setting the pause container settings to the pod level SC if defined, and the actual containers get the merge of pod + container overrides, which means they should share settings with pause if things are defined at the pod level and not the container level.

If the user defines a combination of contexts that selinux won't allow to work, then things shouldn't work. But docker should never 'overwrite' anything the user asked for.

Agree. Thanks!

Yes @eparis is correct. I have to get a patch to docker to handle the labeling correctly. We need to allow user flexibility (i.e. the ability to break their system if they so choose).

The default case should be that all container processes sharing the same IPC namespace share the same SELinux label, but if a user asks to override the label, it should be allowed.

The problem in this case, although I can't put my finger on it, seems a little bit more complex.
The security options are not overwritten but added twice.

You can actually see it a little when this fix is not applied:

1. Issue with the format of the label ":" vs "=", worked on here: kubernetes/kubernetes#40179

docker inspect the pause container:

"SecurityOpt": [
            "seccomp=unconfined",
            "label:type:spc_t"
        ],

Docker inspect the etcd container:

  "SecurityOpt": [
            "seccomp=unconfined",
            "label:type:spc_t",
            "label=user:system_u",
            "label=role:system_r",
            "label=type:spc_t",
            "label=level:s0:c23,c719"

The label with the format "label:" is coming from the manifest with which we started the deployment of the etcd pod. At least I think so.

Kubelet or docker seems to add the others, which kind of makes sense, as they are for the image files placed in /var/lib/docker/containers and /var/lib/kubelet/pods.

From the discussion in kubernetes/kubernetes#40179 we know docker is not preventing adding more security opts than should actually be allowed.
But where or how this addition takes place I can't tell.

@eparis @rhatdan Could you help check this issue too? Why do seLinuxOptions at the pod level and at the container level have different results? Thanks: https://github.com/kubernetes/kubernetes/issues/37809

I have a pull request to help fix this issue for docker:
https://github.com/docker/docker/pull/30652

We have already described the issue.

In standard docker if you add a container with one label and then add another container which you want to join the same IPC Namespace, docker will take the label of the first container and assign it to the second container. This way SELinux will not block access to IPC created by the first container that needs to be used by the second container.

The SELinux type/MCS label at the pod level sets the label for the pause container. The SELinux type/MCS label at the container level sets it for each container that is joining the pause container.
If Kubernetes does not set a label for the pause container it will get labeled something like container_t:s0:c1,c2. Then kubernetes tries to add a container with a label like spc_t:s0. In older versions of docker, docker would see that the security-opt field was set and not call the code to merge the SELinux labels. But this was causing problems with people setting seccomp: basically we would not get the expected behaviour of two containers sharing the same SELinux label if any non-label field was set in the security-opt. A patch was merged upstream to take away the security-opt check. This introduced a bug where, if a user did specify an SELinux label security-opt for a container that was joining another container, the security opt was ignored.

Bottom Line:

With old docker, if the pause/POD container came in with no label it would start with container_t:s0:c1,c2. If a new container was sent in to join the pause container with spc_t:s0, docker would launch the second container with spc_t:s0, and k8s was happy. After the fix, the second container ended up with the more restrictive container_t:s0:c1,c2 and k8s was unhappy. The correct fix to the problem on the k8s side is to set the pause/POD label to spc_t:s0, so the added container will match as spc_t:s0.

With my patch above we will allow users to specify alternate selinux labels for each container added, but this should only be done by people who understand how SELinux confinement works. I.e. you might want to set up a multi-container pod where one container has more/fewer rights than the container it is joining.

The docker patch would have fixed the k8s problem too, but it does not eliminate the fact that the pause container probably should be running with the same SELinux label as the containers joining it.

@rhatdan thanks so much

So.........how to test? @dgoodwin @jasonbrooks @rhatdan @jessfraz?

I'm sorry, I'm not well versed in building stuff, but willing to do so.... I have some clue, but any help is welcome.
kubernetes/kubernetes#40179 moved..... so how do I include that?

Thanks

Ok, figured it out over the weekend, but it's still not working.

Ok, I built kubeadm and kubelet (this part I forgot, hence my latest comment) myself from the master branch with kubernetes/kubernetes#40903 merged, and it works with selinux enforcing, even with weave running, although weave is not running correctly. That's a weave issue to deal with later, as well as the other providers I still have to test.
But at least no selinux denials with selinux enforcing.

Regarding the double entries referenced in kubernetes/kubernetes#37807:

  "SecurityOpt": [
            "seccomp=unconfined",
            "label=type:spc_t",
            "label=user:system_u",
            "label=role:system_r",
            "label=type:spc_t",
            "label=level:s0:c210,c552"

I think that stems from some validation when DetermineEffectiveSecurityContext is run, as @pweil- mentioned, with the label just added rather than replaced as a docker security option. Since docker runs with all the options passed to it but only uses the last security options passed, this seems to work. Not sure if that is expected behavior for docker, but that's a different matter. @rhatdan might want to take another look at that, regarding docker/docker#30652

Not sure if we should close the issue until we get the rpms from the official repos, test again, test the pod networking first and fix it, update the docs, etc. So let me know.

Thanks all for fixing this :)

This looks correct, although with the patch I have for upstream docker you would end up with only one "label=type:spc_t" field.

@rhatdan question: I built docker with your patch (well, maybe I didn't do it right, but I think I did), and I still see this.

this behavior didn't change:
docker run -ti --security-opt label=type:container_t --security-opt label=type:spc_t fedora sleep 5

 "MountLabel": "system_u:object_r:container_file_t:s0:c118,c497",
    "ProcessLabel": "system_u:system_r:spc_t:s0:c118,c497",

snip

"SecurityOpt": [
"label=type:container_t",
"label=type:spc_t"

it runs with your patch, none the wiser.

Docker accepts that; is the patch breaking that? Is there a sane use case where docker should be able to do that?

From this issue, we (the kubernetes pod runtime) end up with double labels, which I think come from the validation in the kubelet; if docker stops accepting more options than it should... well, it breaks again.

Look, if it's sane, we should be able to run a pod with different security options for containers within a pod, and I think that was the intention; it should be possible, but an alignment is needed. Docker and kubernetes cannot have different expectations about this.

I might be wrong, adding @eparis @dgoodwin @jasonbrooks @luxas @pweil- @pmorie

My change was not to prevent that, although we probably should. My change was about a container joining the IPC namespace or PID namespace of another:

# docker run -d --name test fedora sleep 10
# docker run -d --security-opt label=type:spc_t --ipc container:test fedora cat /proc/self/attr/current

The second container should be running as spc_t, where the old code would have had it running as container_t. This basically mimics two containers running in the same pod with different SELinux labels.

Thanks @rhatdan for clarifying.

Closing this one, as the RBAC-enabled changes for pod networking are tracked in #143.

Not sure about the double labels; it seems to be an interaction between kubernetes/kubelet and docker, as mentioned above.

Anyway, filed this for docker: https://github.com/docker/docker/issues/31328

Thanks all,

I'm running with Enforcing and having no issues with flannel, FYI. I'm wondering if specific SELinux configurations break it and others don't? Not an SELinux expert, btw.
