Both when upgrading from 1.12 and from 1.13, the new master is unable to join the cluster.
1. What kops version are you running? The command kops version will display
this information.
```
Version 1.14.0 (git-d5078612f)
```
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
```
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.11", GitCommit:"25074a190ef2a07d8b0ed38734f2cb373edfb868", GitTreeState:"clean", BuildDate:"2019-09-18T14:34:46Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}
```
3. What cloud provider are you using?
aws
4. What commands did you run? What is the simplest way to reproduce this issue?
```
kops upgrade cluster --yes
kops rolling-update cluster --yes
```
5. What happened after the commands executed?
```
VALIDATION ERRORS
KIND     NAME                   MESSAGE
Machine  i-0451sdfd0754fafef6   machine "i-0451sdfd0754fafef6" has not yet joined cluster

Validation Failed
```
6. What did you expect to happen?
The cluster to migrate from 1.13.11 to 1.14.6.
@ysaakpr Did you find any resolution to this issue yet? I am facing the same error.
I stopped the migration and reverted to the 1.13 build.
Facing the same issue.
In kube-apiserver.log I see this:
```
error: enable-admission-plugins plugin "Initializers" is unknown
```
In kube-scheduler.log:
```
reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://127.0.0.1/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused
```
And in kube-controller-manager.log:
```
1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 127.0.0.1:443: connect: connection refused
```
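For anyone else triaging this: the scheduler and controller-manager errors are just a symptom of kube-apiserver being down on that master, so the apiserver log is the one to check first. A minimal sketch, assuming the kops default log location on the master (adjust the path if your setup differs):

```sh
# On the master that failed to join: check why kube-apiserver is down.
# /var/log/kube-apiserver.log is the kops default location (an assumption
# here; adjust if you changed it).
sudo grep -i "admission" /var/log/kube-apiserver.log | tail -n 5
# If this prints: error: enable-admission-plugins plugin "Initializers" is unknown,
# the cluster spec still lists a plugin that 1.14 removed.
```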
Initializers were removed in 1.14: https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/. You have to disable the admission plugin.
Remove:
```yaml
spec:
  kubeAPIServer:
    enableAdmissionPlugins:
    - Initializers
```
It is always a good idea to have a small test cluster to check migrations across minor k8s releases. Figured it out myself there :)
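For completeness, a minimal sketch of how that edit fits into the kops workflow (assuming the usual state-store setup; substitute your own cluster name where needed):

```sh
# Open the cluster spec in $EDITOR and delete the Initializers entry under
# spec.kubeAPIServer.enableAdmissionPlugins (or the whole key if it was the
# only entry).
kops edit cluster

# Push the updated spec, then roll the masters so kube-apiserver restarts
# without the removed plugin.
kops update cluster --yes
kops rolling-update cluster --yes
```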
my issue exactly ... thanks for making me look at it :D
Most likely this is my issue as well, but I haven't updated to the latest yet. I can only update once I'm done with the migration.
Had the same issue, but my cluster validated the masters and then they stopped working. Getting the cluster definition and deleting Initializers from enableAdmissionPlugins as @faheem-cliqz suggested did the trick. kube-apiserver started working well again.
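If you want to confirm the rolled masters actually picked up the change, a quick sketch (the manifest path below is the kops default for static pods on a master and is an assumption; adjust if yours differs):

```sh
# On a master: the kube-apiserver static pod manifest should no longer
# list Initializers in --enable-admission-plugins.
sudo grep -- --enable-admission-plugins \
  /etc/kubernetes/manifests/kube-apiserver.manifest
```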
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.