Minikube: Enable CNI network plugin by default

Created on 9 Jun 2017 · 15 comments · Source: kubernetes/minikube

Is there any reason for not enabling the CNI network plugin by default?

kind/feature lifecycle/rotten

Most helpful comment

This "rotten" issue dance is pretty silly.

All 15 comments

No, I don't think so - other than the fact that we don't really run any integration tests against it today. As a first step, I think we can start running an additional integration test with CNI enabled.

That's a good idea. And yes, let's start with tests.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

/reopen

If you start Minikube without specifying a network plugin, what mode does it use? Does Kubernetes let the container runtime handle networking?
Thanks

I've tried to use NetworkPolicy in Minikube without success.
It seems that Minikube doesn't have a network provider configured.

__1. Create a Minikube VM__

$ minikube start \
--vm-driver=virtualbox \
--profile kube0 \
--kubernetes-version v1.8.0 \
--extra-config apiserver.Admission.PluginNames="Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,GenericAdmissionWebhook,ResourceQuota" \
--network-plugin cni \
--cpus 4 \
--memory 4096
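To see which CNI configuration the VM actually ends up with, one can inspect /etc/cni/net.d inside the VM (a sketch; the exact file names and contents depend on the minikube version, and the `kube0` profile name comes from the command above):

```shell
# List and dump the CNI configuration files inside the minikube VM.
minikube ssh --profile kube0 -- ls /etc/cni/net.d
minikube ssh --profile kube0 -- "cat /etc/cni/net.d/*"
```

If this shows only a plain "bridge" configuration, no NetworkPolicy enforcement should be expected.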

__2. Create sample pod__
I've followed this guide: https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy

$ kubectl run nginx --image=nginx --replicas=2
$ kubectl expose deployment nginx --port=80

__3. Create a network policy and test__

$ cat nginx-policy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"

$ kubectl create -f nginx-policy.yaml
$ kubectl run busybox --rm -ti --labels="access=true" --image=busybox /bin/sh
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.100.0.16:80)
/ #

As you can see from Connecting to nginx (10.100.0.16:80), the busybox pod with --labels="access=true" can reach nginx.
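The useful counterpart (a sketch, following the same guide) is the negative test: a pod without the access=true label should be unable to reach nginx once a policy-enforcing CNI is in place. If it still connects, the policy is not being enforced:

```shell
# Negative test: a pod WITHOUT the access=true label should be blocked
# if the NetworkPolicy were actually enforced by the CNI plugin.
kubectl run busybox-noaccess --rm -ti --image=busybox /bin/sh
# Inside the pod, this should fail with a timeout when a
# policy-enforcing network provider is installed:
#   wget --spider --timeout=1 nginx
```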

Any ideas? Is Minikube configured with a network provider that supports NetworkPolicy?
Regards.

It seems I have to install a network provider in Minikube (v0.25.1). I installed Calico, but now I'm having issues when deploying new pods.
Details here: https://github.com/kubernetes/minikube/issues/2259#issuecomment-379101315

@chilcano by default the CNI configuration in the minikube VM uses the simple "bridge" plugin (very similar to what you get if you don't use CNI but the Docker network; only the IP addresses change). I don't think the CNI bridge plugin supports network policies, so you have to use a more capable CNI implementation.
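A minimal sketch of that approach: start minikube with the CNI plugin enabled, then install a policy-capable provider such as Calico. The manifest URL below is illustrative only; it should match your Kubernetes and Calico versions:

```shell
# Start minikube with CNI enabled, then install a network provider
# that enforces NetworkPolicy (Calico here; the manifest URL is an
# example and must match your cluster version).
minikube start --network-plugin=cni
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# Wait until the calico-node pods are Running before testing policies.
kubectl -n kube-system get pods -w
```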

Thanks @atoy40 for your help.

I don't think the CNI bridge plugin supports network policies.

Yes, I think the same; in fact I've installed Calico as the network provider to use NetworkPolicy, but something isn't working properly in the Minikube/CNI/network-provider integration.

Regards.

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

/reopen

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

This "rotten" issue dance is pretty silly.
