Is this a BUG REPORT or FEATURE REQUEST?
/kind bug
Environment:
minikube version: v0.23.0
OS:
NAME=Fedora
VERSION="25 (Workstation Edition)"
ID=fedora
VERSION_ID=25
PRETTY_NAME="Fedora 25 (Workstation Edition)"
ANSI_COLOR="0;34"
CPE_NAME="cpe:/o:fedoraproject:fedora:25"
HOME_URL="https://fedoraproject.org/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=25
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=25
PRIVACY_POLICY_URL=https://fedoraproject.org/wiki/Legal:PrivacyPolicy
VARIANT="Workstation Edition"
VARIANT_ID=workstation
VM driver:
"DriverName": "kvm",
ISO version:
"ISO": "/home/fedora/.minikube/machines/minikube/boot2docker.iso",
"Boot2DockerURL": "file:///home/fedora/.minikube/cache/iso/minikube-v0.23.6.iso",
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4", GitTreeState:"dirty", BuildDate:"2017-10-17T15:09:55Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
What happened:
The PodPreset that I create in the minikube environment does not seem to work. How do I enable PodPreset, and how do I debug this situation?
What you expected to happen:
The PodPreset to be applied to the Pod.
How to reproduce it:
Here is the PodPreset config I am using:
$ cat podpreset-preset.yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: allow-database
spec:
  selector:
    matchLabels:
      role: frontend
  env:
    - name: DB_PORT
      value: "6379"
  volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir: {}
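If the settings.k8s.io API is available, the preset should be retrievable right after creation (resource name from metadata.name above):
$ kubectl get podpreset allow-database -o yaml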
And here is the Pod that the above PodPreset should apply to (note its role: frontend label matches the preset's matchLabels selector):
$ cat podpreset-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: website
  labels:
    app: website
    role: frontend
spec:
  containers:
    - name: website
      image: nginx
      ports:
        - containerPort: 80
I started the minikube cluster with PodPreset enabled as an admission controller by running the following command:
$ minikube start --vm-driver kvm --cpus 4 --memory 8192 \
--extra-config=apiserver.admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,\
PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota,PodPreset
Source of the above command: https://github.com/kubernetes/minikube/blob/master/docs/configuring_kubernetes.md
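One way to verify the extra-config actually reached the API server is to grep the server's command line inside the VM (a sketch; assumes this minikube version passes a command through minikube ssh and that the apiserver/localkube process is visible there):
$ minikube ssh -- 'ps aux | grep -e admission-control'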
Create the PodPreset and the Pod:
$ kubectl create -f podpreset-preset.yaml
podpreset "allow-database" created
$ kubectl create -f podpreset-pod.yaml
pod "website" created
The pod is running:
$ kubectl get pod
NAME      READY     STATUS    RESTARTS   AGE
website   1/1       Running   0          12s
but the resulting Pod spec shows no trace of the PodPreset:
$ kubectl get pod website -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2017-11-21T06:31:12Z
  labels:
    app: website
    role: frontend
  name: website
  namespace: default
  resourceVersion: "76528"
  selfLink: /api/v1/namespaces/default/pods/website
  uid: 918f366f-ce85-11e7-99c6-52540074eee7
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: website
    ports:
    - containerPort: 80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-t6zcs
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: minikube
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: default-token-t6zcs
    secret:
      defaultMode: 420
      secretName: default-token-t6zcs
status:
  ...
Now, how do I find out what is wrong with what I am doing, and how can I tell whether PodPreset is even functional right now?
Output of minikube logs:
Full logs can be found here: http://pastebin.centos.org/436146/
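A sanity check worth recording here: per the PodPreset docs, the settings.k8s.io/v1alpha1 API group must be enabled in addition to the admission controller. Since the preset above was created successfully the group is presumably on, but it can be confirmed with:
$ kubectl api-versions | grep settings.k8s.io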
Apparently, I have the same trouble with v0.24.1.
I tried both of the following, but neither would start the PodPreset admission controller:
minikube start --vm-driver kvm2 --memory 4096 --cpus 4 --v 10 --extra-config=apiserver.Admission.Control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount${security_admission},DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PodPreset
and
minikube start --vm-driver kvm2 --memory 4096 --cpus 4 --v 10 --extra-config=apiserver.admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount${security_admission},DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PodPreset
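If the flag is being rejected or silently ignored, the minikube logs may show it; a rough way to look (assuming the admission settings are echoed into the component logs):
$ minikube logs | grep -i admission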
To test whether PodPreset is enabled (source: https://kubernetes.io/docs/tasks/inject-data-application/podpreset/):
kubectl create -f https://k8s.io/docs/tasks/inject-data-application/podpreset-preset.yaml
kubectl get podpreset
kubectl create -f https://k8s.io/docs/tasks/inject-data-application/podpreset-pod.yaml
kubectl get pods
kubectl get pod website -o yaml | grep 'mountPath: /cache'
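If the preset had been injected, the last grep would print the mount; per the PodPreset docs the pod should also carry a podpreset.admission.kubernetes.io annotation, which can be checked with:
$ kubectl get pod website -o jsonpath='{.metadata.annotations}'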
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
I have reproduced the same error with version 0.25.0.
@itham this does work; however, it also seems to disable all other admission plugins.
For anybody who is running into the same problem as me, using all of these admission plugins led to a stable minikube again:
--extra-config=apiserver.Admission.PluginNames=PodPreset,ServiceAccount,NamespaceExists,DefaultStorageClass
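For completeness, a full start command using that flag might look like this (driver and sizing copied from the earlier commands; a sketch, not verified on every minikube version):
$ minikube start --vm-driver kvm2 --memory 4096 --cpus 4 \
  --extra-config=apiserver.Admission.PluginNames=PodPreset,ServiceAccount,NamespaceExists,DefaultStorageClass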
By just using @itham's line, I had failing storage-provisioner, kube-dns, and dashboard pods in the _kube-system_ namespace. They failed with this error message:
$ kubectl --namespace=kube-system logs kube-dns-54ff9759fc-p82f9 kubedns
I0410 00:34:20.099440 1 dns.go:48] version: 1.14.4-2-g5584e04
F0410 00:34:20.099554 1 server.go:57] Failed to create a kubernetes client: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
This probably happens because the ServiceAccount admission plugin is missing. Tracing the code, though, this doesn't make sense to me: the PluginNames mentioned there should clearly get appended to the defaults.
Anyway, activating all the mentioned admission plugins fixes it.
As a side note, the recommended sets of admission controllers are documented on this page: https://kubernetes.io/docs/admin/admission-controllers/#is-there-a-recommended-set-of-admission-controllers-to-use
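Combining that recommended set with PodPreset would look roughly like this (flag spelling as in the working PluginNames line above; treat it as a sketch):
$ minikube start --vm-driver kvm2 --memory 4096 --cpus 4 \
  --extra-config=apiserver.Admission.PluginNames=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PodPreset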
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close