[root@ip-10-226-0-x ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:30:51Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
[root@ip-10-226-0-x ~]# kops version
Version 1.7.1 (git-c69b811)
Hi Team,
We are using kops on the AWS platform. We have an external NFS server which we are trying to mount in a container, so we created a PV and PVC, but the PVC stays in a Pending state with the error below. Kindly let me know how we can fix this issue.
[root@ip-10-226-0-x ~]# kubectl describe pvc nfs-pvprodc --namespace=production
Name: nfs-pvprodc
Namespace: production
StorageClass: gp2
Status: Pending
Volume:
Labels:
Annotations: volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/aws-ebs
Capacity:
Access Modes:
Events:
Type     Reason              Age                From                         Message
----     ------              ----               ----                         -------
Warning  ProvisioningFailed  2s (x385 over 1h)  persistentvolume-controller  Failed to provision volume with StorageClass "gp2": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce] are supported
Second, even with ReadWriteOnce the PVC is not accessible from the pod container; the same configuration works with Kubernetes 1.11.
Kindly help to fix both issues.
Regards
Anupam
Maybe I am misunderstanding the question. You are talking about an NFS volume, but the kubectl output shows volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/aws-ebs. It looks like your definition is targeting AWS EBS instead of your NFS server. As far as I know, you should create something like these: https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-pv.yaml
https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-pvc.yaml
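Roughly, those examples boil down to something like this (a sketch only; the server address and export path are placeholders you would replace with your own):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <your-nfs-server>   # placeholder
    path: /exports              # placeholder
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi

The storageClassName: "" on the claim is what keeps the default provisioner from kicking in.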
I have created a PV and PVC for my NFS server; please find the outcome and YAML below. But it is automatically selecting aws-ebs. Kindly suggest. I tested on version 1.11 (kubeadm) and the same YAML works without any issue.
There is no error when running the PV below.
pv.yaml -------------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pvprod
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /opt/alt
    server: 10.226.0.174
    readOnly: false
-------------------------pvc.yaml-------------------------------------------
After running it, the state always shows Pending, with all the messages I shared in the first post. Is there any way to force a change of the storage-provisioner, if this is a bug?
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvprodc
  namespace: production
spec:
  accessModes:
Regards
Anupam Narayan
When I changed it to accessModes [ReadWriteOnce], it created a new PV and the status changed to Bound, but with ReadWriteMany it is not working (the error from my first post). Kindly help; we want to use ReadWriteMany for NFS. Why is the StorageClass gp2?
With ReadWriteOnce
[root@ip-10-226-0-178 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                    STORAGECLASS   REASON   AGE
nfs-pvprod                                 1Gi        RWX            Retain           Available                                                    3d
pvc-09f88c99-7b94-11e8-b06f-0acd0479e348   1Gi        RWO            Delete           Bound       production/nfs-pvprodc   gp2                     3d
[root@ip-10-226-0-X ~]# kubectl get pvc --namespace=production
NAME          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvprodc   Bound     pvc-09f88c99-7b94-11e8-b06f-0acd0479e348   1Gi        RWO            gp2            3d
[root@ip-10-226-0-X ~]#
Regards
Anupam
gp2 is the default storage class, and it can only be mounted RWO.
There's some NFS examples here:
https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs
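You can see which class is marked as the default with:

$ kubectl get storageclass

(the default one is flagged via an is-default-class annotation). Since your PVC did not name a class, the default gp2 provisioner tried to create an EBS volume, and EBS volumes only support ReadWriteOnce. Setting storageClassName: "" in the claim opts out of dynamic provisioning so it can bind to your pre-created NFS PV instead.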
If I use the shared example then I get a storage class error. I think with 1.7.4 this PVC ReadWriteMany is not supported; kindly confirm which version supports accessModes [ReadWriteMany].
[root@ip-10-226-0-178 ~]# kubectl create -f pvc.yaml
error: error validating "pvc.yaml": error validating data: ValidationError(PersistentVolumeClaim): unknown field "storage" in io.k8s.kubernetes.pkg.api.v1.PersistentVolumeClaim; if you choose to ignore these errors, turn validation off with --validate=false
Did you copy-paste the example?
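For what it's worth, that particular validation error usually means the YAML indentation was lost in transit, so storage ends up parsed as a field of the PersistentVolumeClaim itself instead of sitting under spec.resources.requests. A contrived illustration:

# Broken: indentation lost, so storage floats up to the top level
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi

# Correct:
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi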
Yes, the same; I just copied, pasted, and ran it...
Please repaste your pvc.yaml here and wrap the code in three backticks so it formats correctly (```)
-------------------pv.yaml-----------------------------------------------
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
nfs:
path: /opt/alt
server: 10.226.0.174
-------------------------------------- pvc.yaml ------------------------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1Gi
Regards
Anupam
You didn't put the backticks, so we can't check whether your indentation is wrong.
If you are replying via email, please use the website instead to avoid your client corrupting the message
I've indented your yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
  namespace: default
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.226.0.174
    path: /opt/alt
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
This runs successfully on my test cluster:
$ kubectl create -f pv.yaml
persistentvolume "nfs" created
persistentvolumeclaim "nfs" created
And produces this pv:
$ kubectl get pv --all-namespaces | grep nfs
nfs       1Gi       RWX       Retain    Bound     default/nfs             35s
And this pvc:
$ kubectl get pvc --all-namespaces | grep nfs
default   nfs       Bound     nfs       1Gi       RWX                     2m
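To actually consume the claim from a pod, reference it by name in the pod spec, along these lines (a minimal sketch; the pod name, container name, and image are just placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-test              # placeholder name
  namespace: default
spec:
  containers:
    - name: app
      image: busybox          # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: nfs-vol
          mountPath: /mnt/nfs # where the NFS export appears in the container
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: nfs        # the PVC created above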
Yes, when I used your shared version it works, and I also found my mistake: my initial version will only work in 1.11, but your shared example works everywhere.
Thanks a lot.
Really appreciate.
Regards
Anupam
If this is now resolved, please close the issue :)
Yes, it is working.