1. What kops version are you running? The command kops version will display
this information.
$ kops version
Version 1.14.0 (git-d5078612f)
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-07T14:30:40Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: EOF
3. What cloud provider are you using?
aws
4. What commands did you run? What is the simplest way to reproduce this issue?
kops replace cluster -f test-cluster-spec.yaml
error: error fetching cluster "REDUCTED": error reading cluster configuration "REDUCTED": error parsing s3://REDUCTED/kops/REDUCTED/config: no kind "Cluster" is registered for version "kops.k8s.io/v1alpha2" in scheme "k8s.io/kops/pkg/kopscodecs/codecs.go:33"
the file from the error above is the following:
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  generation: 1
  name: REDUCTED
spec:
  additionalPolicies:
    master: |
      [
        {
          "Effect": "Allow",
          "Action": ["sts:AssumeRole"],
          "Resource": "*"
        }
      ]
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["sts:AssumeRole"],
          "Resource": "*"
        }
      ]
  addons:
  - manifest: s3://REDUCTED/kops/REDUCTED/addons/test-cluster-addons.yaml
  api:
    loadBalancer:
      sslCertificate: REDUCTED
      type: Public
  authentication:
    aws: {}
  authorization:
    rbac: {}
  channel: stable
  cloudLabels:
    env: test
  cloudProvider: aws
  configBase: s3://REDUCTED/kops/REDUCTED
  dnsZone: REDUCTED
  etcdClusters:
  - etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-east-1f
      name: a
    name: main
    version: 3.3.10
  - etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-east-1f
      name: a
    name: events
    version: 3.3.10
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeDNS:
    provider: CoreDNS
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
    resolvConf: /run/systemd/resolve/resolv.conf
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.14.2
  masterInternalName: REDUCTED
  masterPublicName: REDUCTED
  networkCIDR: REDUCTED
  networking:
    amazonvpc: {}
  nonMasqueradeCIDR: REDUCTED
  sshAccess:
  - REDUCTED
  subnets:
  - cidr: REDUCTED
    name: us-east-1f
    type: Private
    zone: us-east-1f
  - cidr: REDUCTED
    name: utility-us-east-1f
    type: Utility
    zone: us-east-1f
  topology:
    bastion:
      bastionPublicName: REDUCTED
    dns:
      type: Public
    masters: private
    nodes: private
  updatePolicy: external
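The apiVersion at the top of this file is exactly what the kops codec rejects. A quick way to see what the parser will try to decode is to download the config object and grep it; the local path `./config` here is illustrative, assuming the object was first fetched with the AWS CLI:

```shell
# Illustrative: assumes the config object was downloaded locally first, e.g.
#   aws s3 cp s3://REDUCTED/kops/REDUCTED/config ./config
# Print the apiVersion and kind that the kops codec will try to decode.
grep -E '^(apiVersion|kind):' ./config
```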
5. What happened after the commands executed?
An error message shows up:
error: error fetching cluster "REDUCTED": error reading cluster configuration "REDUCTED": error parsing s3://REDUCTED/kops/REDUCTED/config: no kind "Cluster" is registered for version "kops.k8s.io/v1alpha2" in scheme "k8s.io/kops/pkg/kopscodecs/codecs.go:33"
6. What did you expect to happen?
I expected the config file to be replaced.
7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
The same error occurs when trying to do so.
8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
I1016 22:09:31.071588 14313 factory.go:68] state store s3://REDUCTED/kops
I1016 22:09:31.077357 14313 aws_cloud.go:1229] Querying EC2 for all valid zones in region "us-east-1"
I1016 22:09:32.283457 14313 request_logger.go:45] AWS request: ec2/DescribeAvailabilityZones
I1016 22:09:33.537627 14313 status.go:57] Querying AWS for etcd volumes
I1016 22:09:33.537722 14313 status.go:68] Listing EC2 Volumes
I1016 22:09:33.538224 14313 request_logger.go:45] AWS request: ec2/DescribeVolumes
I1016 22:09:34.129179 14313 status.go:40] Cluster status (from cloud): {}
I1016 22:09:34.129353 14313 s3context.go:325] unable to read /sys/devices/virtual/dmi/id/product_uuid, assuming not running on EC2: open /sys/devices/virtual/dmi/id/product_uuid: permission denied
I1016 22:09:34.129411 14313 s3context.go:170] defaulting region to "us-east-1"
I1016 22:09:35.559976 14313 s3context.go:210] found bucket in region "us-east-1"
I1016 22:09:35.560084 14313 s3fs.go:220] Reading file "s3://REDUCTED/kops/REDUCTED/config"
F1016 22:09:36.791918 14313 helpers.go:116] error: error fetching cluster "REDUCTED": error reading cluster configuration "REDUCTED": error parsing s3://REDUCTED/kops/REDUCTED/config: no kind "Cluster" is registered for version "kops.k8s.io/v1alpha2" in scheme "k8s.io/kops/pkg/kopscodecs/codecs.go:33"
9. Anything else do we need to know?
Hi @tomklino, can you try downloading the config file mentioned in your logs, editing the apiVersion from kops.k8s.io/v1alpha2 to kops/v1alpha2, and re-uploading it to S3? You'll need to do the same with each file under s3://REDUCTED/kops/REDUCTED/instancegroup as well.
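A sketch of that workaround as a shell loop, assuming the AWS CLI is configured; the bucket and cluster names are the redacted placeholders from this report and must be substituted:

```shell
# Placeholders: substitute the real state-store bucket and cluster name.
STATE="s3://REDUCTED/kops"
CLUSTER="REDUCTED"

# Download the cluster config, rewrite the apiVersion, and re-upload it.
aws s3 cp "$STATE/$CLUSTER/config" ./config
sed -i 's|kops.k8s.io/v1alpha2|kops/v1alpha2|' ./config
aws s3 cp ./config "$STATE/$CLUSTER/config"

# Repeat for every instance group object in the state store.
for key in $(aws s3 ls "$STATE/$CLUSTER/instancegroup/" | awk '{print $NF}'); do
  aws s3 cp "$STATE/$CLUSTER/instancegroup/$key" "./$key"
  sed -i 's|kops.k8s.io/v1alpha2|kops/v1alpha2|' "./$key"
  aws s3 cp "./$key" "$STATE/$CLUSTER/instancegroup/$key"
done
```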
A different parsing error results:
error: unable to check for instanceGroup: error reading InstanceGroup "algo": error parsing s3://REDUCTED/kops/REDUCTED/instancegroup/algo: no kind "InstanceGroup" is registered for version "kops.k8s.io/v1alpha2" in scheme "k8s.io/kops/pkg/kopscodecs/codecs.go:33"
I missed the second part of your message. I did that and it ran; now I'm stuck on something else.
Still, what happened that required such a manual update? Is there a way to tell? Is this a known bug/issue?
For what it's worth, I also got this error when creating a cluster with the 1.16.0-alpha.1 release. Editing the apiVersion from kops.k8s.io/v1alpha2 to kops/v1alpha2 in the following files in the state store worked for me:
This happened to me as well, and I discovered that my current kops is a lower version than the kops I used to deploy the cluster.
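One hedged way to spot that mismatch is to compare version strings with a version-aware sort; the two version values below are illustrative, standing in for the output of the local `kops version` and the version that created the cluster:

```shell
# Illustrative version comparison using sort -V. The values stand in for
# the locally installed kops and the kops that created the cluster.
local_version="1.14.0"
cluster_version="1.16.0-alpha.1"

oldest=$(printf '%s\n%s\n' "$local_version" "$cluster_version" | sort -V | head -n1)
if [ "$oldest" = "$local_version" ] && [ "$local_version" != "$cluster_version" ]; then
  echo "local kops is older than the kops that created the cluster"
fi
```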
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
I had the same problem after downgrading kops from v1.16.2 to v1.14.1.
error reading cluster configuration: error reading cluster configuration "cluster.example.com": error parsing s3://cluster-example-state/cluster.example.com/config: no kind "Cluster" is registered for version "kops.k8s.io/v1alpha2" in scheme "k8s.io/kops/pkg/kopscodecs/codecs.go:33"
This was fixed by following @rifelpet's solution, but in my case I only had to modify the config file (kops.k8s.io/v1alpha2 --> kops/v1alpha2).