Kind: using Ceph (Rook) fails with "rbd: map failed: (30) Read-only file system"

Created on 31 Jul 2019 · 10 comments · Source: kubernetes-sigs/kind

What happened:

A binding error occurs for a PVC backed by Ceph (Rook):

  Warning  FailedMount       33s (x11 over 6m47s)   kubelet, test-control-plane  MountVolume.SetUp failed for volume "pvc-6e0378c5-a995-4ad9-be05-9e99e4a01dbf" : mount command failed, status: Failure, reason: Rook: Mount volume failed: failed to attach volume replicapool/pvc-6e0378c5-a995-4ad9-be05-9e99e4a01dbf: failed to map image replicapool/pvc-6e0378c5-a995-4ad9-be05-9e99e4a01dbf cluster rook-ceph. failed to map image replicapool/pvc-6e0378c5-a995-4ad9-be05-9e99e4a01dbf: Failed to complete 'rbd': exit status 30. . output: rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (30) Read-only file system
  Warning  FailedMount  13s (x3 over 4m44s)  kubelet, test-control-plane  Unable to mount volumes for pod "wordpress-mysql-6cc97b86fc-m9wc7_default(64067a46-f850-4802-bba7-d7ab7b32fd48)": timeout expired waiting for volumes to attach or mount for pod "default"/"wordpress-mysql-6cc97b86fc-m9wc7". list of unmounted volumes=[mysql-persistent-storage]. list of unattached volumes=[mysql-persistent-storage default-token-hshm5]

What you expected to happen:

Pod started successfully.

How to reproduce it (as minimally and precisely as possible):

kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-1.0/cluster/examples/kubernetes/ceph/common.yaml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-1.0/cluster/examples/kubernetes/ceph/operator.yaml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-1.0/cluster/examples/kubernetes/ceph/cluster-test.yaml 
kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/storageclass-test.yaml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-1.0/cluster/examples/kubernetes/mysql.yaml
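
To observe the failure after applying the manifests above, something like the following should work (generic commands; none of the names here are taken from the manifests):

```
# check whether the PVC from the mysql example ever binds, and surface mount-related events
kubectl get pvc
kubectl get events --sort-by='.lastTimestamp' | grep -i mount
```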

Anything else we need to know?:

Environment:

  • kind version: v0.4.0
  • Kubernetes version: v1.15.0
  • Docker version: 18.09.6
  • OS: CentOS Linux 7 (Core)
  • kernel: 5.1.11-1.el7.elrepo.x86_64
Labels: kind/bug, lifecycle/stale

All 10 comments

well, that's really suspicious

rbd: map failed: (30) Read-only file system

reading this https://github.com/lxc/lxd/issues/2709
seems we have to create a /dev/rbd0 device and map it to the container :thinking:

References:
https://groups.google.com/forum/#!topic/coreos-user/d-ySGISJjjc

/assign

@lework ceph in block mode doesn't seem to work inside kind; however, cephfs works: https://github.com/rook/rook/blob/master/Documentation/ceph-filesystem.md

Seems that ceph in block mode needs sysfs to be writable; need to investigate more.

We are very unlikely to make /sys writeable in the docker nodes. This is a requirement.
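
For anyone who wants to confirm this themselves, a quick check from the host should show that /sys is mounted read-only inside the node container (the container name test-control-plane is taken from the events above; adjust it to your cluster name):

```
# inspect how sysfs is mounted inside the kind node container
docker exec test-control-plane sh -c 'mount | grep sysfs'
# the output should list /sys with the "ro" flag, which is what makes 'rbd map' fail
```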

Thank you for answering, I will try cephfs.

@lework please let us know if that worked ... and if we should close the issue :)

any update?
/lifecycle stale

no activity for months now...
please re-open or file a new issue if there is further follow up

For anyone looking to replicate this and get ceph working on kind, I got ceph running with filesystem provisioner using the following commands. Note: Currently, this will only work on a single-node cluster.

ROOK_VERSION="release-1.1"
# deploy common resources
kubectl apply -f "https://raw.githubusercontent.com/rook/rook/${ROOK_VERSION}/cluster/examples/kubernetes/ceph/common.yaml"
# deploy ceph operator
kubectl apply -f "https://raw.githubusercontent.com/rook/rook/${ROOK_VERSION}/cluster/examples/kubernetes/ceph/operator.yaml"
# create a rook ceph cluster
kubectl apply -f "https://raw.githubusercontent.com/rook/rook/${ROOK_VERSION}/cluster/examples/kubernetes/ceph/cluster-test.yaml"
# create shared file system; source: https://rook.io/docs/rook/v1.1/ceph-filesystem-crd.html
kubectl apply -f "https://raw.githubusercontent.com/rook/rook/${ROOK_VERSION}/cluster/examples/kubernetes/ceph/filesystem-test.yaml"
# create storageclass for file system
kubectl apply -f "https://raw.githubusercontent.com/rook/rook/${ROOK_VERSION}/cluster/examples/kubernetes/ceph/csi/cephfs/storageclass.yaml"
# make it the default storageclass
kubectl patch storageclass "csi-cephfs" -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
# test things: example pvc, pod
kubectl apply -f "https://raw.githubusercontent.com/rook/rook/${ROOK_VERSION}/cluster/examples/kubernetes/ceph/csi/cephfs/pvc.yaml"
kubectl apply -f "https://raw.githubusercontent.com/rook/rook/${ROOK_VERSION}/cluster/examples/kubernetes/ceph/csi/cephfs/pod.yaml"
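
The following verification steps are not part of the comment above, but they are a reasonable way to confirm each stage (exact resource names depend on the applied manifests):

```
kubectl get storageclass                 # csi-cephfs should be marked (default)
kubectl -n rook-ceph get pods            # operator, mon, mds and csi pods should become Running
kubectl get pvc                          # the example PVC should reach Bound
kubectl get pods                         # the example pod should reach Running
```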

Thanks, super helpful!!!
Just a little more info. You also need to create the pool and match it with the storageclass.
kubectl apply -f "https://raw.githubusercontent.com/rook/rook/${ROOK_VERSION}/cluster/examples/kubernetes/ceph/pool-test.yaml"
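
For reference, that pool-test.yaml boils down to roughly the following CephBlockPool; this is a sketch based on the release-1.1 examples, and the exact spec (for example the failure domain) may differ, so verify against the linked file:

```
# rough equivalent of pool-test.yaml (name, namespace, and size are assumed from the rook examples)
cat <<EOF | kubectl apply -f -
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 1
EOF
```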
