When I try to deploy MongoDB on minikube v1.9.2 it fails with:
pod has unbound immediate PersistentVolumeClaims
$ minikube start
minikube v1.9.2 on Darwin 10.14.6
KUBECONFIG=github/noobaa-operator/kubeconfig
Using the hyperkit driver based on user configuration
Starting control plane node m01 in cluster minikube
Creating hyperkit VM (CPUs=6, Memory=3000MB, Disk=20000MB) ...
Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
Enabling addons: default-storageclass, storage-provisioner
Done! kubectl is now configured to use "minikube"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
$ minikube addons list
|-----------------------------|----------|--------------|
| ADDON NAME | PROFILE | STATUS |
|-----------------------------|----------|--------------|
| dashboard | minikube | disabled |
| default-storageclass | minikube | enabled |
| efk | minikube | disabled |
| freshpod | minikube | disabled |
| gvisor | minikube | disabled |
| helm-tiller | minikube | disabled |
| ingress | minikube | disabled |
| ingress-dns | minikube | disabled |
| istio | minikube | disabled |
| istio-provisioner | minikube | disabled |
| logviewer | minikube | disabled |
| metrics-server | minikube | disabled |
| nvidia-driver-installer | minikube | disabled |
| nvidia-gpu-device-plugin | minikube | disabled |
| registry | minikube | disabled |
| registry-aliases | minikube | disabled |
| registry-creds | minikube | disabled |
| storage-provisioner | minikube | enabled |
| storage-provisioner-gluster | minikube | disabled |
|-----------------------------|----------|--------------|
$ minikube config view
- cpus: 6
- memory: 3000
- vm-driver: hyperkit
$ kubectl get pv,pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/db-noobaa-db-0 Pending standard 15m
$ kubectl describe pv,pvc
Name: db-noobaa-db-0
Namespace: test
StorageClass: standard
Status: Pending
Volume:
Labels: app=noobaa
noobaa-db=noobaa
Annotations: volume.beta.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: noobaa-db-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 91s (x62 over 16m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
$ kubectl describe pod/noobaa-db-0
Name: noobaa-db-0
Namespace: test
Priority: 0
Node: <none>
Labels: app=noobaa
controller-revision-hash=noobaa-db-8485b48f4d
noobaa-db=noobaa
statefulset.kubernetes.io/pod-name=noobaa-db-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/noobaa-db
Init Containers:
init:
Image: noobaa/noobaa-core:5.3.1
Port: <none>
Host Port: <none>
Command:
/noobaa_init_files/noobaa_init.sh
init_mongo
Limits:
cpu: 500m
memory: 500Mi
Requests:
cpu: 500m
memory: 500Mi
Environment: <none>
Mounts:
/mongo_data from db (rw)
/var/run/secrets/kubernetes.io/serviceaccount from noobaa-token-vjwpw (ro)
Containers:
db:
Image: centos/mongodb-36-centos7
Port: <none>
Host Port: <none>
Command:
bash
-c
/opt/rh/rh-mongodb36/root/usr/bin/mongod --port 27017 --bind_ip_all --dbpath /data/mongo/cluster/shard1
Limits:
cpu: 100m
memory: 500M
Requests:
cpu: 100m
memory: 500M
Environment: <none>
Mounts:
/data from db (rw)
/var/run/secrets/kubernetes.io/serviceaccount from noobaa-token-vjwpw (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
db:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: db-noobaa-db-0
ReadOnly: false
noobaa-token-vjwpw:
Type: Secret (a volume populated by a Secret)
SecretName: noobaa-token-vjwpw
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7s (x15 over 17m) default-scheduler running "VolumeBinding" filter plugin for pod "noobaa-db-0": pod has unbound immediate PersistentVolumeClaims
MacBook-Pro:noobaa-operator liranmauda$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/noobaa-core-0 1/1 Running 2 18m
pod/noobaa-db-0 0/1 Pending 0 18m
pod/noobaa-operator-676b7b4979-6dzsw 1/1 Running 0 18m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/noobaa-db ClusterIP 10.102.213.137 <none> 27017/TCP 18m
service/noobaa-mgmt LoadBalancer 10.106.57.165 <pending> 80:30879/TCP,443:31429/TCP,8445:32204/TCP,8446:32546/TCP 18m
service/s3 LoadBalancer 10.103.141.147 <pending> 80:32217/TCP,443:31257/TCP,8444:31003/TCP 18m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/noobaa-operator 1/1 1 1 18m
NAME DESIRED CURRENT READY AGE
replicaset.apps/noobaa-operator-676b7b4979 1 1 1 18m
NAME READY AGE
statefulset.apps/noobaa-core 1/1 18m
statefulset.apps/noobaa-db 0/1 18m
MacBook-Pro:noobaa-operator liranmauda$ kubectl describe statefulset.apps/noobaa-db
Name: noobaa-db
Namespace: test
CreationTimestamp: Tue, 21 Apr 2020 17:35:38 +0300
Selector: noobaa-db=noobaa
Labels: app=noobaa
Annotations: <none>
Replicas: 1 desired | 1 total
Update Strategy: RollingUpdate
Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=noobaa
noobaa-db=noobaa
Service Account: noobaa
Init Containers:
init:
Image: noobaa/noobaa-core:5.3.1
Port: <none>
Host Port: <none>
Command:
/noobaa_init_files/noobaa_init.sh
init_mongo
Limits:
cpu: 500m
memory: 500Mi
Requests:
cpu: 500m
memory: 500Mi
Environment: <none>
Mounts:
/mongo_data from db (rw)
Containers:
db:
Image: centos/mongodb-36-centos7
Port: <none>
Host Port: <none>
Command:
bash
-c
/opt/rh/rh-mongodb36/root/usr/bin/mongod --port 27017 --bind_ip_all --dbpath /data/mongo/cluster/shard1
Limits:
cpu: 100m
memory: 500M
Requests:
cpu: 100m
memory: 500M
Environment: <none>
Mounts:
/data from db (rw)
Volumes: <none>
Volume Claims:
Name: db
StorageClass:
Labels: app=noobaa
Annotations: <none>
Capacity: 50Gi
Access Modes: [ReadWriteMany]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 18m statefulset-controller create Claim db-noobaa-db-0 Pod noobaa-db-0 in StatefulSet noobaa-db success
Normal SuccessfulCreate 18m statefulset-controller create Pod noobaa-db-0 in StatefulSet noobaa-db successful
I am using macOS (Darwin 10.14.6) and hyperkit.
On v1.8.2 it doesn't happen (I downgraded minikube and it passed, then upgraded it again and it failed again).
It happened on several Mac machines.
Maybe related to #3869.
Hey @liranmauda thanks for opening this issue, it looks like it could be a bug with the storage provisioner.
Would you be able to provide the k8s files you applied to the cluster so that I could reproduce this issue?
Hi @priyawadhwa
Here is the StatefulSet YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: noobaa-db
labels:
app: noobaa
spec:
replicas: 1
selector:
matchLabels:
noobaa-db: noobaa
serviceName: noobaa-db
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: noobaa
noobaa-db: noobaa
spec:
serviceAccountName: noobaa
initContainers:
#----------------#
# INIT CONTAINER #
#----------------#
- name: init
image: NOOBAA_CORE_IMAGE
command:
- /noobaa_init_files/noobaa_init.sh
- init_mongo
resources:
requests:
cpu: "500m"
memory: "500Mi"
limits:
cpu: "500m"
memory: "500Mi"
volumeMounts:
- name: db
mountPath: /mongo_data
containers:
#--------------------#
# DATABASE CONTAINER #
#--------------------#
- name: db
image: NOOBAA_DB_IMAGE
command:
- bash
- -c
- /opt/rh/rh-mongodb36/root/usr/bin/mongod --port 27017 --bind_ip_all --dbpath /data/mongo/cluster/shard1
resources:
requests:
cpu: "2"
memory: "4Gi"
limits:
cpu: "2"
memory: "4Gi"
volumeMounts:
- name: db
mountPath: /data
volumeClaimTemplates:
- metadata:
name: db
labels:
app: noobaa
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
Tell me if you need anything more.
I am running into similar issues. Any updates?
Running into the same issue trying to use: https://github.com/helm/charts/tree/master/stable/mongodb-replicaset
minikube version
minikube version: v1.5.1
commit: 4df684c4cc2bd9dc9979cd5dbb44bdfa410850b4
kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T23:41:55Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Help would be appreciated, thank you.
I don't know if this helps but I was able to fix the problem by changing my persistent volumes. I am now using something like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-database-pvc
spec:
storageClassName: local-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-database-pv
spec:
storageClassName: local-storage
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data/my-database"
type: DirectoryOrCreate
and then you can use the volume in your deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-database-deployment
spec:
selector:
matchLabels:
app: my-database
replicas: 1
template:
metadata:
labels:
app: my-database
spec:
containers:
- name: my-database
image: my-database-image:latest
volumeMounts:
- name: persistent-db-storage
mountPath: /my-database/mount-path
volumes:
- name: persistent-db-storage
persistentVolumeClaim:
claimName: my-database-pvc
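One note on the example above: if the local-storage class referenced here does not already exist in the cluster, you may also want to define it explicitly. A minimal sketch of such a StorageClass (this manifest is an assumption for illustration, not part of the original setup):

```yaml
# Hypothetical StorageClass for the statically provisioned hostPath PV above.
# kubernetes.io/no-provisioner means nothing is created dynamically for this
# class; the PVC simply binds to a pre-created PV with the same storageClassName.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
```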
Update: after about 20 minutes or so, the issue resolved itself.
@yoavcloud ok great :)
If someone runs into this, could they please provide the output of minikube logs? It would be helpful to see the storage provisioner state. Thanks!
Funny thing is that the same deployment works on a real k8s cluster.
@tstromberg output from minikube logs shown below:
==> storage-provisioner [0cc0c93bc9b8] <==
E0512 05:02:36.173614 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to list *v1.PersistentVolumeClaim: v1.PersistentVolumeClaimList: Items: []v1.PersistentVolumeClaim: v1.PersistentVolumeClaim: ObjectMeta: v1.ObjectMeta: readObjectFieldAsBytes: expect : after object field, parsing 1010 ...:{},"k:{\"... at {"kind":"PersistentVolumeClaimList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/persistentvolumeclaims","resourceVersion":"46363"},"items":[{"metadata":{"name":"keycloak-postgresql-claim","namespace":"keycloak","selfLink":"/api/v1/namespaces/keycloak/persistentvolumeclaims/keycloak-postgresql-claim","uid":"faa87fe5-b2e5-4b39-b4e7-2eafe3b9ce63","resourceVersion":"46363","creationTimestamp":"2020-05-12T04:46:34Z","labels":{"app":"keycloak"},"annotations":{"volume.beta.kubernetes.io/storage-provisioner":"k8s.io/minikube-hostpath"},"ownerReferences":[{"apiVersion":"keycloak.org/v1alpha1","kind":"Keycloak","name":"example-keycloak","uid":"a1899913-3ca1-45a0-86f3-e0c721ceea06","controller":true,"blockOwnerDeletion":true}],"finalizers":["kubernetes.io/pvc-protection"],"managedFields":[{"manager":"keycloak-operator","operation":"Update","apiVersion":"v1","time":"2020-05-12T04:46:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:app":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a1899913-3ca1-45a0-86f3-e0c721ceea06\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}},"f:status":{"f:phase":{}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2020-05-12T04:46:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volume.beta.kubernetes.io/storage-provisioner":{}}}}}]},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard","volumeMode":"Filesystem"},"status":{"phase":"Pending"}}]}
The line above is repeated hundreds of times.
Additionally, the following line exists at the tail end of the logs:
==> storage-provisioner [5b046032cebd] <==
F0512 02:44:13.940167 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
This is caused by the addition of managedFields in v1.18.0 beta 2 [1]. More details about the issue can be found in https://github.com/kubernetes/kubernetes/issues/89080. I've found that minikube config set kubernetes-version v1.16.0 is a workaround for now. Alternatively, creating the PV before creating the dependent resource also works.
In the first log excerpt I pasted above, you can see that the managedFields cannot be parsed by r2d4's fork of external-storage. But looking at the source here, it seems like the relevant changes have already been made.
@tstromberg based on your comment in #3628, does gcr.io/k8s-minikube/storage-provisioner still need to be updated?
[1] https://kubernetes.io/blog/2020/04/01/kubernetes-1.18-feature-server-side-apply-beta-2/
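For the second workaround, a rough sketch of a manually created hostPath PV that the pending standard-class claim could bind to might look like the following; the PV name, path, and size are my own assumptions chosen to satisfy the db-noobaa-db-0 claim shown earlier, not an official manifest:

```yaml
# Sketch of a pre-created PV so the PVC binds without the broken dynamic provisioner.
# Capacity and access modes must satisfy what the claim requests.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: noobaa-db-pv-0            # hypothetical name
spec:
  storageClassName: standard      # must match the claim's storage class
  capacity:
    storage: 50Gi                 # at least the 50Gi the claim requests
  accessModes:
    - ReadWriteOnce               # must include the mode the claim asks for
  hostPath:
    path: /data/noobaa-db-0       # assumed path inside the minikube VM
    type: DirectoryOrCreate
```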
Tried the workaround (minikube config set kubernetes-version v1.16.0) and creating the PV first; neither helped on Arch Linux / minikube 1.9.2. Will try 1.10 later.
Hi.
I am facing a similar issue with other Helm charts. It seems to be related to how the finalizers are set up in the PVC configuration. Using finalizers: {} creates the PVC correctly and the default finalizer is added later:
finalizers:
- kubernetes.io/pvc-protection
Hope this helps to identify the issue.
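For illustration, a minimal sketch of a PVC created with an explicitly empty finalizers field as described above (the name and size are placeholders, and the finalizers: {} from the comment is written here as an empty list):

```yaml
# Illustrative only: PVC created with no finalizers; the controller adds
# kubernetes.io/pvc-protection on its own after creation.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim     # placeholder name
  finalizers: []
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi        # placeholder size
```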
I have the same storage-provisioner errors (the reflector.go:205 parse error and the i/o timeout) in my logs. Is there any update or workaround for this?
I have learned that minikube after version 1.8.2 uses the docker driver by default instead of a virtual machine driver, and that saving and restoring of data is not yet implemented for the related directories.
So if you start minikube with an explicit driver selection, e.g. minikube start --driver=virtualbox, it could help in this case as well. Please check it and post the results here.
(See https://github.com/kubernetes/minikube/issues/8458)
I am using hyperkit, and it is not working:
cat ~/.minikube/config/config.json
{
"cpus": 6,
"memory": 3000,
"driver": "hyperkit"
}
$ minikube version
minikube version: v1.11.0
commit: 57e2f55f47effe9ce396cea42a1e0eb4f611ebbd
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:52:00Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl describe pod/noobaa-db-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 18s (x17 over 20m) default-scheduler running "VolumeBinding" filter plugin for pod "noobaa-db-0": pod has unbound immediate PersistentVolumeClaims
$ kubectl describe pvc/db-noobaa-db-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 85s (x82 over 21m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
I get the same when creating an Elasticsearch cluster using the Elasticsearch operator
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html
kubectl version
Client Version: version.Info
{
"Major":"1",
"Minor":"18",
"GitVersion":"v1.18.4",
"GitCommit":"c96aede7b5205121079932896c4ad89bb93260af",
"GitTreeState":"clean",
"BuildDate":"2020-06-17T11:41:22Z",
"GoVersion":"go1.13.9",
"Compiler":"gc",
"Platform":"linux/amd64"
}
Server Version: version.Info
{
"Major":"1",
"Minor":"18",
"GitVersion":"v1.18.3",
"GitCommit":"2e7996e3e2712684bc73f0dec0200d64eec7fe40",
"GitTreeState":"clean",
"BuildDate":"2020-05-20T12:43:34Z",
"GoVersion":"go1.13.9",
"Compiler":"gc",
"Platform":"linux/amd64"
}
==> storage-provisioner [1a6feb0feced] <==
E0622 20:48:45.221595 1 reflector.go:205] k8s.io/minikube/vendor/github.com/r2d4/external-storage/lib/controller/controller.go:411: Failed to list *v1.PersistentVolumeClaim: v1.PersistentVolumeClaimList: Items: []v1.PersistentVolumeClaim: v1.PersistentVolumeClaim: ObjectMeta: v1.ObjectMeta: readObjectFieldAsBytes: expect : after object field, parsing 1443 ...:{},"k:{\"... at {"kind":"PersistentVolumeClaimList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/persistentvolumeclaims","resourceVersion":"1714"},"items":[{"metadata":{"name":"elasticsearch-data-elasticsearch-es-default-0","namespace":"akashascrolls","selfLink":"/api/v1/namespaces/akashascrolls/persistentvolumeclaims/elasticsearch-data-elasticsearch-es-default-0","uid":"72b99b3f-4989-4a5a-8be3-796bed9f3265","resourceVersion":"1714","creationTimestamp":"2020-06-22T20:38:29Z","labels":{"common.k8s.elastic.co/type":"elasticsearch","elasticsearch.k8s.elastic.co/cluster-name":"elasticsearch","elasticsearch.k8s.elastic.co/statefulset-name":"elasticsearch-es-default"},"annotations":{"volume.beta.kubernetes.io/storage-provisioner":"k8s.io/minikube-hostpath"},"ownerReferences":[{"apiVersion":"elasticsearch.k8s.elastic.co/v1","kind":"Elasticsearch","name":"elasticsearch","uid":"ec3d15b4-4811-4af9-9617-c77aee501a80","controller":true,"blockOwnerDeletion":false}],"finalizers":["kubernetes.io/pvc-protection"],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2020-06-22T20:38:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volume.beta.kubernetes.io/storage-provisioner":{}},"f:labels":{".":{},"f:common.k8s.elastic.co/type":{},"f:elasticsearch.k8s.elastic.co/cluster-name":{},"f:elasticsearch.k8s.elastic.co/statefulset-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec3d15b4-4811-4af9-9617-c77aee501a80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}},"f:status":{"f:phase":{}}}}]},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard","volumeMode":"Filesystem"},"status":{"phase":"Pending"}}]}
I'm using the docker driver. I have a mysql deployment with the following PVC and that one gets bound.
I think you hit it straight to the point!
The issue is easier to see if you look at the YAML: if you do a `kubectl get pvc <name> -o yaml`, you get something like this in the managed fields:
```yaml
f:ownerReferences:
  .: {}
  k:{"uid":"690cb65e-c608-4995-97ce-68c7eb7ce3a6"}:
```
which, if you translate into JSON (e.g. `kubectl get pvc <name> -o json`), becomes:
```json
"f:ownerReferences": {
  ".": {},
  "k:{\"uid\":\"39a5cd2c-ad5d-4915-800d-fb27bc2884da\"}": {
    ".": {},
```
This is valid from a JSON perspective, but it seems readObjectFieldAsBytes is not correctly escaping the quotes in the field name.
It is indeed #7218
Hello,
I had the same issue while trying to deploy Elasticsearch on Minikube following this guide:
https://www.elastic.co/blog/getting-started-with-elastic-cloud-on-kubernetes-deployment
This is my configuration:
minikube version: v1.11.0
commit: 57e2f55f47effe9ce396cea42a1e0eb4f611ebbd
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:04:32Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Thank you very much in advance for your help.
Hi, I am facing a similar issue with https://github.com/helm/charts/tree/master/stable/mongodb-replicaset.
Started minikube using minikube start --docker-env HTTPS_PROXY="XXXX" --image-repository=XXXXXX
Can someone please help me? TIA
```
kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
coredns-dfd8465f5-gs4sg 1/1 Running 0 53m
coredns-dfd8465f5-s8l98 1/1 Running 0 53m
etcd-dev-pkankar-kubetest001 1/1 Running 0 53m
kube-apiserver-dev-pkankar-kubetest001 1/1 Running 0 53m
kube-controller-manager-dev-pkankar-kubetest001 1/1 Running 0 53m
kube-proxy-wnlqn 1/1 Running 0 53m
kube-scheduler-dev-pkankar-kubetest001 1/1 Running 0 53m
storage-provisioner 0/1 CrashLoopBackOff 15 53m
tiller-deploy-fd7bf95c5-ljdpl 1/1 Running 0 53m
kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard (default) k8s.io/minikube-hostpath Delete Immediate false 59m
[root@dev-pkankar-kubetest001 stackstorm-ha]# kubectl logs storage-provisioner -n kube-system
standard_init_linux.go:190: exec user process caused "exec format error"
[root@dev-pkankar-kubetest001 ~]# kubectl describe pods dull-abalone-mongodb-ha-0
Name: dull-abalone-mongodb-ha-0
Namespace: default
Priority: 0
Node:
Labels: app=mongodb-ha
controller-revision-hash=dull-abalone-mongodb-ha-55cf9f9d7c
release=dull-abalone
statefulset.kubernetes.io/pod-name=dull-abalone-mongodb-ha-0
Annotations: checksum/config: bfadd6051ca83eab31fa7937b3dd8d2d59d39815339c2f94e20823dd8bdf89cf
Status: Pending
IP:
IPs:
Controlled By: StatefulSet/dull-abalone-mongodb-ha
Init Containers:
copy-config:
Image: busybox:1.29.3
Port:
Host Port:
Command:
sh
Args:
-c
set -e
set -x
cp /configdb-readonly/mongod.conf /data/configdb/mongod.conf
cp /keydir-readonly/key.txt /data/configdb/key.txt
chmod 600 /data/configdb/key.txt
Environment:
Mounts:
/configdb-readonly from config (rw)
/data/configdb from configdir (rw)
/keydir-readonly from keydir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-65bnv (ro)
/work-dir from workdir (rw)
install:
Image: unguiculus/mongodb-install:0.7
Port:
Host Port:
Args:
--work-dir=/work-dir
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-65bnv (ro)
/work-dir from workdir (rw)
bootstrap:
Image: mongo:4.0
Port:
Host Port:
Command:
/work-dir/peer-finder
Args:
-on-start=/init/on-start.sh
-service=dull-abalone-mongodb-ha
Environment:
POD_NAMESPACE: default (v1:metadata.namespace)
REPLICA_SET: rs0
TIMEOUT: 900
SKIP_INIT: false
TLS_MODE: requireSSL
AUTH: true
ADMIN_USER:
ADMIN_PASSWORD:
Mounts:
/data/configdb from configdir (rw)
/data/db from datadir (rw)
/init from init (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-65bnv (ro)
/work-dir from workdir (rw)
Containers:
mongodb-ha:
Image: mongo:4.0
Port: 27017/TCP
Host Port: 0/TCP
Command:
mongod
Args:
--config=/data/configdb/mongod.conf
--dbpath=/data/db
--replSet=rs0
--port=27017
--bind_ip=0.0.0.0
--auth
--keyFile=/data/configdb/key.txt
Liveness: exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=3
Readiness: exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
Mounts:
/data/configdb from configdir (rw)
/data/db from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-65bnv (ro)
/work-dir from workdir (rw)
Conditions:
Type Status
PodScheduled False
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-dull-abalone-mongodb-ha-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dull-abalone-mongodb-ha-mongodb
Optional: false
init:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dull-abalone-mongodb-ha-init
Optional: false
keydir:
Type: Secret (a volume populated by a Secret)
SecretName: dull-abalone-mongodb-ha-keyfile
Optional: false
workdir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
configdir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
default-token-65bnv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-65bnv
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 11s (x5 over 60s) default-scheduler running "VolumeBinding" filter plugin for pod "dull-abalone-mongodb-ha-0": pod has unbound immediate PersistentVolumeClaims
```
Facing the same issue deploying Elasticsearch on Minikube; did you find a solution?
I'm facing the same issue with the Elasticsearch operator; did you find a solution?
My $0.02:
After some digging, I found that the storage-provisioner addon was failing to create its pod for some reason, even though minikube addons list showed that it was enabled.
After kicking the addon with minikube addons enable storage-provisioner, it spun up and immediately created the PVs, and everything started ticking like a well-oiled machine.
Not guaranteed to work for everyone, but it worked for me.
Update: needed to downgrade to k8s version 1.16 for this to work
Explicitly creating a PV as described here worked for me. https://stackoverflow.com/a/62894931
minikube version: v1.12.1
commit: 5664228288552de9f3a446ea4f51c6f29bbdd0e0
K8S
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-16T00:04:31Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Thanks for updating with the workaround.
I would love to see a tutorial on minikube for how to do a basic PV that works!
@martinmalek would you be interested in sharing your example in our website tutorial? https://minikube.sigs.k8s.io/docs/tutorials/
We have an integration test for PV; we should ensure that it covers this case.
I'm running into this issue with minikube v1.12.1 running k8s 1.18.3:
Warning FailedScheduling 7m51s (x3 over 7m52s) default-scheduler running "VolumeBinding" filter plugin for pod "roach-test-cockroachdb-2": pod has unbound immediate PersistentVolumeClaims
It looks like it's due to a permissions issue:
==> storage-provisioner [7e735da44478] <==
...
E0804 19:42:56.707424 1 controller.go:682] Error watching for provisioning success, can't provision for claim "default/datadir-roach-test-cockroachdb-2": events is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot list resource "events" in API group "" in the namespace "default"
You can repro by installing cockroachdb via helm: https://www.cockroachlabs.com/docs/stable/orchestrate-a-local-cluster-with-kubernetes.html
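If that RBAC error is the root cause, one possible fix would be to grant the storage-provisioner service account access to events. This sketch is my own assumption about the missing permission, not a manifest that ships with minikube:

```yaml
# Hypothetical RBAC grant for the error above: allows the storage-provisioner
# service account to read and create Event objects cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: storage-provisioner-events
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: storage-provisioner-events
subjects:
  - kind: ServiceAccount
    name: storage-provisioner
    namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: storage-provisioner-events
```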
Also experiencing this with any helm chart that requires a PV (redis-ha, rabbitmq-ha, prometheus, grafana).
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten