What kops version are you running? The command kops version will display this information.
kops version
Version 1.8.1 (git-94ef202)
What Kubernetes version are you running? kubectl version will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.
kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.7", GitCommit:"b30876a5539f09684ff9fde266fda10b37738c9c", GitTreeState:"clean", BuildDate:"2018-01-16T21:59:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
What cloud provider are you using? AWS
What commands did you run? What is the simplest way to reproduce this issue?
kops create -f cluster.yaml
What happened after the commands executed? The cluster starts to provision but fails while protokube is setting up volumes. The full log is:
protokube version 0.1
I0327 20:34:09.057938 17871 aws_volume.go:72] AWS API Request: ec2metadata/GetMetadata
I0327 20:34:09.060970 17871 aws_volume.go:72] AWS API Request: ec2metadata/GetMetadata
I0327 20:34:09.061784 17871 aws_volume.go:72] AWS API Request: ec2metadata/GetMetadata
I0327 20:34:09.064735 17871 aws_volume.go:72] AWS API Request: ec2/DescribeInstances
I0327 20:34:09.164567 17871 aws_volume.go:72] AWS API Request: ec2/DescribeVolumes
I0327 20:34:09.164590 17871 dnscontroller.go:101] starting DNS controller
I0327 20:34:09.164631 17871 dnscache.go:75] querying all DNS zones (no cached results)
I0327 20:34:09.164780 17871 route53.go:50] AWS request: route53 ListHostedZones
I0327 20:34:09.294527 17871 volume_mounter.go:254] Trying to mount master volume: "vol-0da35d47f283ecf0f"
I0327 20:34:09.294705 17871 aws_volume.go:72] AWS API Request: ec2/AttachVolume
I0327 20:34:09.616691 17871 aws_volume.go:396] AttachVolume request returned {
AttachTime: 2018-03-27 20:34:09.6 +0000 UTC,
Device: "/dev/xvdu",
InstanceId: "i-0975b8438670d96ad",
State: "attaching",
VolumeId: "vol-0da35d47f283ecf0f"
}
I0327 20:34:09.616892 17871 aws_volume.go:72] AWS API Request: ec2/DescribeVolumes
I0327 20:34:09.729976 17871 volume_mounter.go:254] Trying to mount master volume: "vol-0a6aa23bbe82ef00d"
I0327 20:34:09.730102 17871 aws_volume.go:72] AWS API Request: ec2/AttachVolume
I0327 20:34:10.052057 17871 aws_volume.go:396] AttachVolume request returned {
AttachTime: 2018-03-27 20:34:10.04 +0000 UTC,
Device: "/dev/xvdv",
InstanceId: "i-0975b8438670d96ad",
State: "attaching",
VolumeId: "vol-0a6aa23bbe82ef00d"
}
I0327 20:34:10.052295 17871 aws_volume.go:72] AWS API Request: ec2/DescribeVolumes
I0327 20:34:10.137236 17871 volume_mounter.go:273] Currently attached volumes: [{"ID":"vol-0da35d47f283ecf0f","LocalDevice":"/dev/xvdu","AttachedTo":"","Mountpoint":"","Status":"available","Info":{"Description":"vol-0da35d47f283ecf0f","EtcdClusters":[{"clusterKey":"main","nodeName":"a","nodeNames":["a","c","d"]}]}} {"ID":"vol-0a6aa23bbe82ef00d","LocalDevice":"/dev/xvdv","AttachedTo":"","Mountpoint":"","Status":"available","Info":{"Description":"vol-0a6aa23bbe82ef00d","EtcdClusters":[{"clusterKey":"events","nodeName":"a","nodeNames":["a","c","d"]}]}}]
I0327 20:34:10.137353 17871 volume_mounter.go:58] Master volume "vol-0da35d47f283ecf0f" is attached at "/dev/xvdu"
I0327 20:34:10.137409 17871 volume_mounter.go:72] Doing safe-format-and-mount of /dev/xvdu to /mnt/master-vol-0da35d47f283ecf0f
I0327 20:34:10.137433 17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:10.137444 17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:11.137695 17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:11.137724 17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:12.137963 17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:12.137995 17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:13.138238 17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:13.138268 17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:14.138491 17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:14.138520 17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:15.138760 17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:15.138789 17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:16.139009 17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:16.139479 17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:17.139703 17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:17.139732 17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:18.139975 17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:18.140004 17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:19.140221 17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:19.140249 17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:20.140492 17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:20.140520 17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:21.140885 17871 volume_mounter.go:110] Found volume "vol-0da35d47f283ecf0f" mounted at device "/dev/xvdu"
I0327 20:34:21.142975 17871 volume_mounter.go:147] Creating mount directory "/rootfs/mnt/master-vol-0da35d47f283ecf0f"
I0327 20:34:21.143147 17871 volume_mounter.go:152] Mounting device "/dev/xvdu" on "/mnt/master-vol-0da35d47f283ecf0f"
I0327 20:34:21.143165 17871 mount_linux.go:366] Checking for issues with fsck on disk: /dev/xvdu
I0327 20:34:21.213260 17871 mount_linux.go:379] `fsck` error fsck from util-linux 2.23.2
fsck.ext2: Bad magic number in super-block while trying to open /dev/xvdu
/dev/xvdu:
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
I0327 20:34:21.213303 17871 mount_linux.go:384] Attempting to mount disk: /dev/xvdu /mnt/master-vol-0da35d47f283ecf0f
I0327 20:34:21.343827 17871 mount_linux.go:430] Attempting to determine if disk "/dev/xvdu" is formatted using lsblk with args: ([-n -o FSTYPE /dev/xvdu])
I0327 20:34:21.351350 17871 mount_linux.go:433] Output: "\n"
I0327 20:34:21.351379 17871 mount_linux.go:404] Disk "/dev/xvdu" appears to be unformatted, attempting to format as type: "ext4" with options: [-F /dev/xvdu]
I0327 20:34:22.648419 17871 mount_linux.go:408] Disk successfully formatted (mkfs): ext4 - /dev/xvdu /mnt/master-vol-0da35d47f283ecf0f
I0327 20:34:22.681840 17871 mount_linux.go:168] Detected OS with systemd
W0327 20:34:22.682674 17871 volume_mounter.go:76] unable to mount master volume: "device already mounted at /rootfs/mnt/master-vol-0da35d47f283ecf0f, but is /dev/xvdu and we want /rootfs/dev/xvdu"
To get past this I have to manually umount and remount the partitions.
I used this exact same config about two weeks ago to provision a cluster, and that went fine; I'm not sure what changed, if anything.
Please provide your cluster manifest. Execute
kops get --name my.example.com -oyaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
Please run the commands with the most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or into a gist, and provide the gist link here.
Anything else we need to know?
If I go into the container and manually umount and remount the path as protokube expects, it works and finishes bootstrapping the cluster.
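Roughly, that manual fix looks like the following (a sketch only: the device and mount-point names come from the log above and will differ per cluster, and the docker ps filter is an assumption that needs adjusting to match your protokube container):

# Find the running protokube container; the image tag depends on the kops version.
PROTOKUBE_ID=$(docker ps -q --filter "ancestor=protokube:1.8.0")
# Unmount the path that got mounted under the "wrong" device name, then remount it
# using the device path protokube expects, so its next reconcile loop can succeed.
docker exec "$PROTOKUBE_ID" bash -c '
  umount /rootfs/mnt/master-vol-0da35d47f283ecf0f &&
  mount /rootfs/dev/xvdu /rootfs/mnt/master-vol-0da35d47f283ecf0f
'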
We noticed the same thing when our kubelet nodes went from CoreOS-stable-1632.3.0 to CoreOS-stable-1688.4.0.
@ctbarrett we've experienced exactly the same issue with the latest stable CoreOS. The following one-liner does the trick, but we'd prefer a proper fix:
docker exec $(docker ps -qf "ancestor=protokube:1.8.0") bash -c 'umount /rootfs/mnt/*'
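Note that the ancestor=protokube:1.8.0 filter in that one-liner is pinned to the 1.8.0 image tag, so on other kops versions the filter needs adjusting (or look up the container ID with docker ps first).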
My issues were on CentOS, but I solved them in a similar manner to @Artiax.
I am running into the exact same issue, also running kops 1.8.1 in AWS. However, I am not using CoreOS - I'm using the Debian base images. I tried the docker exec... one-liner, but that did not fix the issue. Did you find a solution to this problem?
The output I get is as follows:
Mar 30 22:30:37 ip-172-31-1-13 systemd[1]: Starting Kubernetes Protokube Service...
Mar 30 22:30:37 ip-172-31-1-13 systemd[1]: Started Kubernetes Protokube Service.
Mar 30 22:30:37 ip-172-31-1-13 docker[1546]: protokube version 0.1
Mar 30 22:30:37 ip-172-31-1-13 docker[1546]: I0330 22:30:37.674785 1576 aws_volume.go:72] AWS API Request: ec2metadata/GetMetadata
Mar 30 22:30:37 ip-172-31-1-13 docker[1546]: I0330 22:30:37.676202 1576 aws_volume.go:72] AWS API Request: ec2metadata/GetMetadata
Mar 30 22:30:37 ip-172-31-1-13 docker[1546]: I0330 22:30:37.676962 1576 aws_volume.go:72] AWS API Request: ec2metadata/GetMetadata
Mar 30 22:30:37 ip-172-31-1-13 docker[1546]: I0330 22:30:37.678841 1576 aws_volume.go:72] AWS API Request: ec2/DescribeInstances
Mar 30 22:30:37 ip-172-31-1-13 docker[1546]: I0330 22:30:37.781307 1576 aws_volume.go:72] AWS API Request: ec2/DescribeVolumes
Mar 30 22:30:37 ip-172-31-1-13 docker[1546]: I0330 22:30:37.781410 1576 dnscontroller.go:101] starting DNS controller
Mar 30 22:30:37 ip-172-31-1-13 docker[1546]: I0330 22:30:37.781437 1576 dnscache.go:75] querying all DNS zones (no cached results)
Mar 30 22:30:37 ip-172-31-1-13 docker[1546]: I0330 22:30:37.782471 1576 route53.go:50] AWS request: route53 ListHostedZones
Mar 30 22:30:37 ip-172-31-1-13 docker[1546]: I0330 22:30:37.911045 1576 volume_mounter.go:254] Trying to mount master volume: "vol-0f1a04a36c6baaaae"
Mar 30 22:30:37 ip-172-31-1-13 docker[1546]: I0330 22:30:37.911510 1576 aws_volume.go:72] AWS API Request: ec2/AttachVolume
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: I0330 22:30:38.184418 1576 aws_volume.go:396] AttachVolume request returned {
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: AttachTime: 2018-03-30 22:30:38.164 +0000 UTC,
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: Device: "/dev/xvdu",
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: InstanceId: "i-0d46fbb1317501ac0",
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: State: "attaching",
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: VolumeId: "vol-0f1a04a36c6baaaae"
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: }
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: I0330 22:30:38.184693 1576 aws_volume.go:72] AWS API Request: ec2/DescribeVolumes
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: I0330 22:30:38.268938 1576 volume_mounter.go:254] Trying to mount master volume: "vol-020d90a464b55678f"
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: I0330 22:30:38.269163 1576 aws_volume.go:72] AWS API Request: ec2/AttachVolume
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: I0330 22:30:38.560245 1576 aws_volume.go:396] AttachVolume request returned {
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: AttachTime: 2018-03-30 22:30:38.543 +0000 UTC,
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: Device: "/dev/xvdv",
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: InstanceId: "i-0d46fbb1317501ac0",
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: State: "attaching",
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: VolumeId: "vol-020d90a464b55678f"
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: }
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: I0330 22:30:38.560654 1576 aws_volume.go:72] AWS API Request: ec2/DescribeVolumes
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: I0330 22:30:38.658480 1576 volume_mounter.go:273] Currently attached volumes: [{"ID":"vol-0f1a04a36c6baaaae","LocalDevice":"/dev/xvdu","AttachedTo":"","Mountpoint":"","Status":"available","Info":{"Description":"vol-0f1a04a36c6baaaae","EtcdClusters":[{"clusterKey":"main","nodeName":"b","nodeNames":["a","b","c"]}]}} {"ID":"vol-020d90a464b55678f","LocalDevice":"/dev/xvdv","AttachedTo":"","Mountpoint":"","Status":"available","Info":{"Description":"vol-020d90a464b55678f","EtcdClusters":[{"clusterKey":"events","nodeName":"b","nodeNames":["a","b","c"]}]}}]
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: I0330 22:30:38.658815 1576 volume_mounter.go:58] Master volume "vol-0f1a04a36c6baaaae" is attached at "/dev/xvdu"
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: I0330 22:30:38.659340 1576 volume_mounter.go:72] Doing safe-format-and-mount of /dev/xvdu to /mnt/master-vol-0f1a04a36c6baaaae
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: I0330 22:30:38.659365 1576 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0f1a04a36c6baaaae"
Mar 30 22:30:38 ip-172-31-1-13 docker[1546]: I0330 22:30:38.659373 1576 volume_mounter.go:107] Waiting for volume "vol-0f1a04a36c6baaaae" to be attached
Mar 30 22:30:39 ip-172-31-1-13 docker[1546]: I0330 22:30:39.659499 1576 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0f1a04a36c6baaaae"
Mar 30 22:30:39 ip-172-31-1-13 docker[1546]: I0330 22:30:39.659519 1576 volume_mounter.go:107] Waiting for volume "vol-0f1a04a36c6baaaae" to be attached
Mar 30 22:30:40 ip-172-31-1-13 docker[1546]: I0330 22:30:40.659641 1576 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0f1a04a36c6baaaae"
Mar 30 22:30:40 ip-172-31-1-13 docker[1546]: I0330 22:30:40.659660 1576 volume_mounter.go:107] Waiting for volume "vol-0f1a04a36c6baaaae" to be attached
And the "Waiting for volume..." message loops endlessly.
We actually got a duplicate of this as well. See #4837
Believed fixed by #4849, i.e. this will be in 1.9.0-beta.1.
However, @thereverendtom, your issue sounds like there's no NVMe support in the image/kernel you're using. If you're using the kops default image, you have to switch from jessie to stretch in the image name. Please do file a new issue, as this one will likely get closed, and we at least need to improve docs, if not provide some hints about NVMe with incompatible images.
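For reference, that jessie-to-stretch switch is just a change to the image field of the instance group spec; a rough sketch using placeholder names (the cluster name, instance-group name and image strings below are illustrative — use the stretch image published for your kops release):

# Dump the instance group that runs on NVMe-only (m5/c5) instances.
kops get ig master-eu-west-1a --name my.example.com -o yaml > ig.yaml

# In ig.yaml, change the image from a jessie build to the corresponding stretch build, e.g.
#   image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-...
# becomes
#   image: kope.io/k8s-1.8-debian-stretch-amd64-hvm-ebs-...

# Push the change back and roll the instance group so new machines boot from the stretch image.
kops replace -f ig.yaml
kops update cluster --name my.example.com --yes
kops rolling-update cluster --name my.example.com --yes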
Just filed issue #4873.
@justinsb I'm running the latest release and I still see this issue; it may be specific to DigitalOcean though. This is on CoreOS:
I0404 16:18:46.963956 1303 volume_mounter.go:58] Master volume "8d213a45-3822-11e8-98d1-0242ac116a05" is attached at "/dev/disk/by-id/scsi-0DO_Volume_kops-1-etcd-events-dev-asykim-com"
I0404 16:18:46.964028 1303 volume_mounter.go:72] Doing safe-format-and-mount of /dev/disk/by-id/scsi-0DO_Volume_kops-1-etcd-events-dev-asykim-com to /mnt/master-8d213a45-3822-11e8-98d1-0242ac116a05
I0404 16:18:46.964060 1303 volume_mounter.go:110] Found volume "8d213a45-3822-11e8-98d1-0242ac116a05" mounted at device "/dev/disk/by-id/scsi-0DO_Volume_kops-1-etcd-events-dev-asykim-com"
I0404 16:18:46.965063 1303 volume_mounter.go:159] Device already mounted on "/mnt/master-8d213a45-3822-11e8-98d1-0242ac116a05", verifying it is our device
I0404 16:18:46.965093 1303 volume_mounter.go:171] Found existing mount of "/dev/disk/by-id/scsi-0DO_Volume_kops-1-etcd-events-dev-asykim-com" at "/mnt/master-8d213a45-3822-11e8-98d1-0242ac116a05"
I0404 16:18:47.015786 1303 mount_linux.go:202] Detected OS with systemd
W0404 16:18:47.016944 1303 volume_mounter.go:76] unable to mount master volume: "device already mounted at /rootfs/mnt/master-8d213a45-3822-11e8-98d1-0242ac116a05, but is /dev/sdb and we want /rootfs/dev/disk/by-id/scsi-0DO_Volume_kops-1-etcd-events-dev-asykim-com or /dev/disk/by-id/scsi-0DO_Volume_kops-1-etcd-events-dev-asykim-com"
It might be because we're not accounting for symlinks somewhere.
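One way to sanity-check that on the DigitalOcean master itself (a quick sketch, using the device and mount-point names from the log above):

# Resolve the by-id symlink protokube was given to the real block device.
readlink -f /dev/disk/by-id/scsi-0DO_Volume_kops-1-etcd-events-dev-asykim-com

# Show which device is actually mounted at the target mount point.
findmnt /mnt/master-8d213a45-3822-11e8-98d1-0242ac116a05

If the symlink resolves to /dev/sdb and /dev/sdb is what's mounted, the volume itself is fine and the failure is just the device comparison in the mounter not resolving the symlink first.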
Issue is still present in 1.9.0-beta.2
In my case etcd never validates either:
kubectl get cs
NAME STATUS MESSAGE ERROR
etcd-1 Unhealthy Get http://127.0.0.1:4002/health: dial tcp 127.0.0.1:4002: getsockopt: connection refused
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
@henrydobson yes this specific bug is present in 1.9.0-beta.2, should be fixed in 1.9.0. Though based on the error you posted it's hard to tell if your issue is directly related to this one.
@andrewsykim I can confirm the bug is not fixed in 1.9.0. Agreed, but certainly mounting all of the required etcd volumes has got to be a step in the right direction when trying to fix the etcd-events connection refused error.
Can you just check if the volume is mounted? This bug is specifically about volume mounting, so if the volume is mounted it's likely a different problem.
Not all etcd volumes have been mounted. In 3-master mode, the most common mount configuration I see is 2 etcd-main and 1 etcd-events, with the remaining 3 volumes “skipped” because protokube believes they are already mounted.
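A quick way to check the same thing on each master (a minimal sketch; mount-point naming follows the master-vol-* pattern seen in the protokube logs above):

# Volumes protokube has actually mounted on this master; expect one for etcd-main and one for etcd-events.
grep master-vol /proc/mounts

# Block devices attached to the instance, whether mounted or not.
lsblk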
You can ignore my report. This was unrelated to kops and now resolved in 1.9.0.
We have seen the same, or a very similar, issue. kops 1.9, kube 1.9.3, Debian stretch base image:
May 12 11:04:24 ip-172-22-39-194 docker[4410]: I0512 11:04:24.000871 4441 volume_mounter.go:255] Trying to mount master volume: "vol-0fa0e257fc6380a0a"
May 12 11:04:24 ip-172-22-39-194 docker[4410]: VolumeId: "vol-0fa0e257fc6380a0a"
May 12 11:04:24 ip-172-22-39-194 docker[4410]: I0512 11:04:24.303251 4441 volume_mounter.go:274] Currently attached volumes: [{"ID":"vol-01ed0d8ca320e4288","LocalDevice":"/dev/xvdu","AttachedTo":"","Mountpoint":"","Status":"available","Info":{"Description":"vol-01ed0d8ca320e4288","EtcdClusters":[{"clusterKey":"main","nodeName":"eu-west-1a","nodeNames":["eu-west-1a","eu-west-1b","eu-west-1c"]}]}} {"ID":"vol-0fa0e257fc6380a0a","LocalDevice":"/dev/xvdv","AttachedTo":"","Mountpoint":"","Status":"available","Info":{"Description":"vol-0fa0e257fc6380a0a","EtcdClusters":[{"clusterKey":"events","nodeName":"eu-west-1a","nodeNames":["eu-west-1a","eu-west-1b","eu-west-1c"]}]}}]
May 12 11:04:25 ip-172-22-39-194 docker[4410]: I0512 11:04:25.485833 4441 volume_mounter.go:58] Master volume "vol-0fa0e257fc6380a0a" is attached at "/dev/xvdv"
May 12 11:04:25 ip-172-22-39-194 docker[4410]: I0512 11:04:25.485844 4441 volume_mounter.go:72] Doing safe-format-and-mount of /dev/xvdv to /mnt/master-vol-0fa0e257fc6380a0a
May 12 11:04:25 ip-172-22-39-194 docker[4410]: I0512 11:04:25.485874 4441 volume_mounter.go:107] Waiting for volume "vol-0fa0e257fc6380a0a" to be attached
May 12 11:04:26 ip-172-22-39-194 docker[4410]: I0512 11:04:26.486055 4441 volume_mounter.go:107] Waiting for volume "vol-0fa0e257fc6380a0a" to be attached
...
Waiting for volume continued until after a reboot. At the second boot the AWS volume is already attached and things work. This is the etcd-events volume. There are no Waiting messages for etcd-main, which seems to have attached successfully. It looks like the volume attached successfully, but something failed to either accept or notice it.
Same problem with 1.9.3 here as well.
Tested on 1.9.0, got the same problem.
For everybody searching for Waiting for volume "vol-......." to be attached (like I was): this might have to do with m5 (or other 5th-generation) AWS instances, as mentioned in #4844 or k8s #49926.
Reverting to m4 instances might do the trick.
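If you do revert, the change is just the machineType on the affected instance groups; a sketch with illustrative names (the instance-group and cluster names are placeholders):

# Change spec.machineType for the masters/nodes, e.g. m5.large -> m4.large.
kops edit ig nodes --name my.example.com
kops edit ig master-eu-west-1a --name my.example.com

# Apply the change and replace the instances with the new type.
kops update cluster --name my.example.com --yes
kops rolling-update cluster --name my.example.com --yes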
Same problem with kops 1.9.2, sadly.
I can confirm that the m5 EC2 generation was our problem too. @flypenguin was correct. m4 works as expected.
I can also confirm this issue on kops 1.10.0, 1.10.5 and M5 instances.
For regions where only m5 machines are available, like Paris, please use Debian stretch instead of jessie.
Switching to Debian Stretch did not solve the issue for us. Protokube still tried to mount volumes that were already mounted.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Got the same issue after enabling etcd-manager with kops 1.11, kubernetes 1.11.6 and t3.medium instances.
Here are the logs of the etcd-manager-enabled pod:
I0121 16:44:07.707311 4096 mounter.go:162] Device already mounted on "/mnt/master-vol-063a3fde2fe799679", verifying it is our device
I0121 16:44:07.707326 4096 mounter.go:174] Found existing mount of "/dev/xvdu" at "/mnt/master-vol-063a3fde2fe799679"
I0121 16:44:07.707420 4096 mount_linux.go:196] Detected OS without systemd
W0121 16:44:07.708403 4096 mounter.go:79] unable to mount master volume: "device already mounted at /rootfs/mnt/master-vol-063a3fde2fe799679, but is /dev/nvme1n1 and we want /rootfs/dev/xvdu or /dev/xvdu"
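On Nitro-based instance types like t3/m5 the EBS volume genuinely appears as /dev/nvme1n1 rather than /dev/xvdu, which is what that warning is complaining about. A small sketch for confirming the mapping on the host, assuming the AMI ships the usual by-id udev rules (the "nvme path not found" lines earlier in this thread show that not every image does):

# Map NVMe block devices back to their EBS volume IDs via the by-id symlinks.
ls -l /dev/disk/by-id/ | grep Elastic_Block_Store

# Illustrative output:
#   nvme-Amazon_Elastic_Block_Store_vol063a3fde2fe799679 -> ../../nvme1n1
# i.e. the volume the mounter wants at /dev/xvdu is actually attached as /dev/nvme1n1.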
This is still an issue, as far as I can tell.
It seems to only be an issue on the masters; nodes can run m5 OK. Haven't tried actual PVs there yet.
Nope, worker nodes have the same problem.
Kops 1.11, kube 1.11.7 and 1.11.8, m4 and m5 instances
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Same issue here, any movement on this?
kops 1.12.2
kubernetes 1.12.9
kubectl 1.12.10
m4 and m5 instances
Ran across this today on one of our clusters.
kops 1.13.0
kubernetes 1.12.10
m4 instances
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Is there a fix for this? I'm seeing the same issue:
I0108 18:15:01.379477 16604 mounter.go:162] Device already mounted on "/mnt/master-vol-0bcaeb59316f24030", verifying it is our device
I0108 18:15:01.379487 16604 mounter.go:174] Found existing mount of "/dev/xvdu" at "/mnt/master-vol-0bcaeb59316f24030"
I0108 18:15:01.379528 16604 mount_linux.go:196] Detected OS without systemd
W0108 18:15:01.380196 16604 mounter.go:79] unable to mount master volume: "device already mounted at /rootfs/mnt/master-vol-0bcaeb59316f24030, but is /dev/nvme1n1 and we want /rootfs/dev/xvdu or /dev/xvdu"
I0108 18:15:01.380209 16604 boot.go:49] waiting for volumes
KOPS Version: 1.14.0
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=2331.1.0
BUILD_ID=2019-12-02-2049
PRETTY_NAME="Container Linux by CoreOS 2331.1.0 (Rhyolite)"
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.