Kops: "kops upgrade" says "No upgrade required" but "kops update" tells me to upgrade.

Created on 26 Apr 2017 · 27 Comments · Source: kubernetes/kops

I wanted to upgrade our Kubernetes cluster, which is running Kubernetes 1.5.2, to 1.5.4, but kops tells me that there is no upgrade required:

> kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.6", GitCommit:"114f8911f9597be669a747ab72787e0bd74c9359", GitTreeState:"clean", BuildDate:"2017-03-28T13:54:20Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:52:34Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
> kops version
Version 1.6.0-alpha.2 (git-d57ceda)
> kops upgrade cluster

No upgrade required

Oddly enough, when I do a kops update cluster, kops tells me that Kubernetes 1.5.4 is available and recommends running kops upgrade cluster:

> kops update cluster
Using cluster from kubectl context: k8s.dev.blackwoodseven.com


*********************************************************************************

A new kubernetes version is available: 1.5.4
Upgrading is recommended (try kops upgrade cluster)

More information: https://github.com/kubernetes/kops/blob/master/permalinks/upgrade_k8s.md#1.5.4

*********************************************************************************

I0426 13:28:53.044899   13861 executor.go:91] Tasks: 0 done / 70 total; 33 can run
I0426 13:28:53.740046   13861 executor.go:91] Tasks: 33 done / 70 total; 14 can run
I0426 13:28:54.133920   13861 executor.go:91] Tasks: 47 done / 70 total; 19 can run
I0426 13:28:56.126557   13861 executor.go:91] Tasks: 66 done / 70 total; 4 can run
I0426 13:28:56.279969   13861 executor.go:91] Tasks: 70 done / 70 total; 0 can run
No changes need to be applied

kops rolling-update cluster also reports that no rolling update is required.

> kops rolling-update cluster
NAME            STATUS  NEEDUPDATE  READY   MIN MAX NODES
master-eu-west-1a   Ready   0       1   1   1   1
nodes           Ready   0       3   3   3   3
nodes-secondary     Ready   0       1   1   1   1
nodes-sre       Ready   0       1   1   1   1

No rolling-update required.

My cluster config looks like this:

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2017-01-24T08:56:32Z
  name: k8s.dev.blackwoodseven.com
spec:
  api:
    dns: {}
  authorization:
    alwaysAllow: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://kops.blackwoodseven.com/k8s.dev.blackwoodseven.com
  docker:
    logDriver: json-file
    version: 1.12.6
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: eu-west-1a
    name: main
  - etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: eu-west-1a
    name: events
  kubeAPIServer:
    cloudProvider: aws
    runtimeConfig:
      batch/v2alpha1: "true"
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.5.2
  masterInternalName: api.internal.k8s.dev.blackwoodseven.com
  masterPublicName: api.k8s.dev.blackwoodseven.com
  networkCIDR: 172.20.0.0/16
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.20.32.0/19
    name: eu-west-1a
    type: Public
    zone: eu-west-1a
  - cidr: 172.20.64.0/19
    name: eu-west-1b
    type: Public
    zone: eu-west-1b
  - cidr: 172.20.96.0/19
    name: eu-west-1c
    type: Public
    zone: eu-west-1c
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

All 27 comments


Have you taken a look at the upgrade document?

https://github.com/kubernetes/kops/blob/master/docs/upgrade.md

Yes, I read through the upgrade doc. I'm fairly certain that a manual upgrade would work; my issue relates to the automatic upgrade, which does not let me upgrade to 1.5.4.
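For reference, the manual path from that doc boils down to pinning the version yourself and then rolling the cluster. A minimal sketch (assuming the cluster name is picked up from the kubectl context, as above):

> kops edit cluster                  # set spec.kubernetesVersion to 1.5.4
> kops update cluster --yes          # apply the new configuration
> kops rolling-update cluster --yes  # replace instances so they come up on 1.5.4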

Here's the -v=10 output as well:

> kops upgrade cluster -v=10
I0427 11:16:58.712135   13543 loader.go:354] Config loaded from file /home/frederiknjs/.kube/config
Using cluster from kubectl context: k8s.dev.blackwoodseven.com

I0427 11:16:59.369924   13543 s3context.go:115] Found bucket "kops.blackwoodseven.com" in region "eu-west-1"
I0427 11:16:59.370014   13543 s3fs.go:173] Reading file "s3://kops.blackwoodseven.com/k8s.dev.blackwoodseven.com/config"
I0427 11:16:59.587746   13543 s3fs.go:210] Listing objects in S3 bucket "kops.blackwoodseven.com" with prefix "k8s.dev.blackwoodseven.com/instancegroup/"
I0427 11:16:59.728092   13543 s3fs.go:236] Listed files in s3://kops.blackwoodseven.com/k8s.dev.blackwoodseven.com/instancegroup: [s3://kops.blackwoodseven.com/k8s.dev.blackwoodseven.com/instancegroup/master-eu-west-1a s3://kops.blackwoodseven.com/k8s.dev.blackwoodseven.com/instancegroup/nodes s3://kops.blackwoodseven.com/k8s.dev.blackwoodseven.com/instancegroup/nodes-secondary s3://kops.blackwoodseven.com/k8s.dev.blackwoodseven.com/instancegroup/nodes-sre]
I0427 11:16:59.728169   13543 s3fs.go:173] Reading file "s3://kops.blackwoodseven.com/k8s.dev.blackwoodseven.com/instancegroup/master-eu-west-1a"
I0427 11:16:59.799898   13543 s3fs.go:173] Reading file "s3://kops.blackwoodseven.com/k8s.dev.blackwoodseven.com/instancegroup/nodes"
I0427 11:16:59.891471   13543 s3fs.go:173] Reading file "s3://kops.blackwoodseven.com/k8s.dev.blackwoodseven.com/instancegroup/nodes-secondary"
I0427 11:17:00.005531   13543 s3fs.go:173] Reading file "s3://kops.blackwoodseven.com/k8s.dev.blackwoodseven.com/instancegroup/nodes-sre"
I0427 11:17:00.055828   13543 channel.go:92] resolving "stable" against default channel location "https://raw.githubusercontent.com/kubernetes/kops/master/channels/"
I0427 11:17:00.055862   13543 channel.go:97] Loading channel from "https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable"
I0427 11:17:00.055875   13543 context.go:126] Performing HTTP request: GET https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable
I0427 11:17:00.196199   13543 channel.go:106] Channel contents: spec:
  images:
    # We put the "legacy" version first, for kops versions that don't support versions ( < 1.5.0 )
    - name: kope.io/k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-10-21
      providerID: aws
      kubernetesVersion: ">=1.4.0 <1.5.0"
    - name: kope.io/k8s-1.5-debian-jessie-amd64-hvm-ebs-2017-01-09
      providerID: aws
      kubernetesVersion: ">=1.5.0"
    - providerID: gce
      name: "cos-cloud/cos-stable-56-9000-84-2"
  cluster:
    kubernetesVersion: v1.4.8
    networking:
      kubenet: {}
  kubernetesVersions:
  - range: ">=1.5.0"
    recommendedVersion: 1.5.4
    requiredVersion: 1.5.1
  - range: "<1.5.0"
    recommendedVersion: 1.4.8
    requiredVersion: 1.4.2
  kopsVersions:
  - range: ">=1.5.0-alpha1"
    recommendedVersion: 1.5.1
    #requiredVersion: 1.5.1
    kubernetesVersion: 1.5.2
  - range: "<1.5.0"
    recommendedVersion: 1.4.4
    #requiredVersion: 1.4.4
    kubernetesVersion: 1.4.8
I0427 11:17:00.196255   13543 aws_utils.go:38] Querying EC2 for all valid regions
I0427 11:17:01.350634   13543 aws_cloud.go:632] Querying EC2 for all valid zones in region "eu-west-1"
I0427 11:17:01.350943   13543 request_logger.go:45] AWS request: ec2/DescribeAvailabilityZones
I0427 11:17:01.538378   13543 channel.go:260] Kubernetes version "1.5.2" does not match range: >=1.4.0 <1.5.0

No upgrade required

I've hit this as well. It looks like the latest stable kops only targets 1.5.2, while the latest kops pre-releases target 1.5.4.

Is the update recommended? kops 1.5.3 is giving conflicting advice.

+1 Yes, I faced a similar issue.

(Sorry if this is completely wrong.) To me, this line looks very strange, @FrederikNS:

I0427 11:17:01.538378   13543 channel.go:260] Kubernetes version "1.5.2" does not match range: **>=1.4.0 <1.5.0**

I guess that would indicate you are using _kope.io/k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-10-21_ ?

@BradErz: I recently forced the update to Kubernetes 1.5.4 by editing the cluster config, but no, the cluster was running kope.io/k8s-1.5-debian-jessie-amd64-hvm-ebs-2017-01-09.

I think the cluster was originally created for Kubernetes 1.4 and has since been upgraded to Kubernetes 1.5, so that might be part of the issue.

Hmmm, maybe it's not updated in some of the state files on S3 for that cluster?

Try syncing your KOPS_STATE_STORE to a local folder and then grep -r "kope.io/k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-10-21" .
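For example, a sketch assuming KOPS_STATE_STORE points at the S3 bucket and the AWS CLI is configured:

aws s3 sync "$KOPS_STATE_STORE" ./state-dump
grep -r "kope.io/k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-10-21" ./state-dump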

This is also blocking us. Has anyone managed to do a rolling-update? I'm moving from 1.7.4 to 1.7.6, so a patch-level release. As we are running two clusters we have the flexibility to switch between them, but that is double the cost to move between Kubernetes versions.

kops rolling-update cluster ${NAME} --yes --fail-on-validate-error="false" --node-interval=8 --instance-group nodes

which reports the same issue as above.

@grealish If you add --force it should perform the rolling update and apply the changes.
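For example, your command from above with --force added (a sketch; --force makes kops roll the instances even when it detects no changes):

kops rolling-update cluster ${NAME} --force --yes --fail-on-validate-error="false" --node-interval=8 --instance-group nodes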

I have hit this as well:
➜ my-kubernetes-files git:(master) ✗ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.3", GitCommit:"0480917b552be33e2dba47386e51decb1a211df6", GitTreeState:"clean", BuildDate:"2017-05-10T15:48:59Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:30:51Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

@unixhat You didn't mention your kops version. Your included output just shows that your kubectl is out of date with your cluster.

+1, I encountered the same issue as well.

Same issue

W0307 10:16:37.953889 26885 upgrade_cluster.go:152] cluster version "1.9.2" is greater than recommended version "1.8.6"

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

I am seeing this issue now with kops 1.9.2

My cluster is running Kubernetes 1.9.8; kops update tells me that 1.9.9 is available, but kops upgrade says "No upgrade required".

/remove-lifecycle rotten

Upgrading kops to the latest version solved a similar issue for me.

Linux:
# Resolve the latest release tag via the GitHub API and download the Linux binary
wget -O kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
# Make it executable and move it onto the PATH
chmod +x ./kops
sudo mv ./kops /usr/local/bin/

If you're running kops 1.9.2, then you'll notice an upgrade message:

$ kops update cluster --name dev-rbac.k8s.local

*********************************************************************************

A new kubernetes version is available: 1.9.9
Upgrading is recommended (try kops upgrade cluster)

More information: https://github.com/kubernetes/kops/blob/master/permalinks/upgrade_k8s.md#1.9.9

*********************************************************************************

triggered by this commit 16 days ago.

Unfortunately, the automatic upgrade doesn't work:

$ kops upgrade cluster --name dev-rbac.k8s.local

No upgrade required

because the stable channel metadata for kops 1.9.2 still recommends Kubernetes 1.9.8 :-(
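You can check what a particular kops binary will offer by fetching the channel it resolves; the default location is visible in the -v=10 output earlier in this thread:

$ curl -s https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable

Judging by that output, kops upgrade appears to use the kubernetesVersion pinned under the kopsVersions entry matching your kops release, while the banner printed by kops update uses the recommendedVersion under kubernetesVersions, which is why the two commands disagree.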

I'm just going to skip this upgrade and go straight to 1.10

I'm having this issue although I've upgraded kops to 1.10.0.

$ kops update cluster                                                                                                                                                                           

*********************************************************************************

A new kubernetes version is available: 1.10.5
Upgrading is recommended (try kops upgrade cluster)

More information: https://github.com/kubernetes/kops/blob/master/permalinks/upgrade_k8s.md#1.10.5

*********************************************************************************

I0911 11:55:55.061281   84378 executor.go:103] Tasks: 0 done / 73 total; 31 can run
I0911 11:55:57.174699   84378 executor.go:103] Tasks: 31 done / 73 total; 24 can run
I0911 11:55:58.843124   84378 executor.go:103] Tasks: 55 done / 73 total; 16 can run
I0911 11:56:00.439339   84378 executor.go:103] Tasks: 71 done / 73 total; 2 can run
I0911 11:56:00.547022   84378 executor.go:103] Tasks: 73 done / 73 total; 0 can run
No changes need to be applied

However, running kops upgrade cluster still returns No upgrade required.

With verbose I can see the following:

[...]
  cluster:
    kubernetesVersion: v1.5.8
    networking:
      kubenet: {}
  kubernetesVersions:
  - range: ">=1.10.0"
    recommendedVersion: 1.10.5
    requiredVersion: 1.10.0
  - range: ">=1.9.0"
    recommendedVersion: 1.9.9
    requiredVersion: 1.9.0
  - range: ">=1.8.0"
    recommendedVersion: 1.8.15
    requiredVersion: 1.8.0
  - range: ">=1.7.0"
    recommendedVersion: 1.7.16
    requiredVersion: 1.7.0
  - range: ">=1.6.0"
    recommendedVersion: 1.6.13
    requiredVersion: 1.6.0
  - range: ">=1.5.0"
    recommendedVersion: 1.5.8
    requiredVersion: 1.5.1
  - range: "<1.5.0"
    recommendedVersion: 1.4.12
    requiredVersion: 1.4.2
  kopsVersions:
  - range: ">=1.10.0-alpha.1"
    recommendedVersion: "1.10.0"
    #requiredVersion: 1.10.0
    kubernetesVersion: 1.10.3
  - range: ">=1.9.0-alpha.1"
    recommendedVersion: 1.9.2
    #requiredVersion: 1.9.0
    kubernetesVersion: 1.9.8
  - range: ">=1.8.0-alpha.1"
    recommendedVersion: 1.8.1
    requiredVersion: 1.7.1
    kubernetesVersion: 1.8.13
  - range: ">=1.7.0-alpha.1"
    recommendedVersion: 1.8.1
    requiredVersion: 1.7.1
    kubernetesVersion: 1.7.16
  - range: ">=1.6.0-alpha.1"
    recommendedVersion: 1.8.1
    requiredVersion: 1.7.1
    kubernetesVersion: 1.6.13
  - range: ">=1.5.0-alpha1"
    recommendedVersion: 1.8.1
    requiredVersion: 1.7.1
    kubernetesVersion: 1.5.8
  - range: "<1.5.0"
    recommendedVersion: 1.8.1
    requiredVersion: 1.7.1
    kubernetesVersion: 1.4.12
I0911 11:57:31.393428   84399 aws_cloud.go:984] Querying EC2 for all valid zones in region "eu-west-3"
I0911 11:57:31.393600   84399 request_logger.go:45] AWS request: ec2/DescribeAvailabilityZones
I0911 11:57:31.611041   84399 channel.go:275] Kubernetes version "1.10.3" does not match range: >=1.4.0 <1.5.0
I0911 11:57:31.611077   84399 channel.go:275] Kubernetes version "1.10.3" does not match range: >=1.5.0 <1.6.0
I0911 11:57:31.611090   84399 channel.go:275] Kubernetes version "1.10.3" does not match range: >=1.6.0 <1.7.0
I0911 11:57:31.611102   84399 channel.go:275] Kubernetes version "1.10.3" does not match range: >=1.7.0 <1.8.0
I0911 11:57:31.611113   84399 channel.go:275] Kubernetes version "1.10.3" does not match range: >=1.8.0 <1.9.0
I0911 11:57:31.611124   84399 channel.go:275] Kubernetes version "1.10.3" does not match range: >=1.9.0 <1.10.0

No upgrade required

I don't really understand the cluster.kubernetesVersion: v1.5.8 I see in this output, as kubectl version returns 1.10.3 (and all nodes are also running 1.10.3 according to kubectl get nodes).

Same here @tgautier, did you manage to upgrade?

Yes, I manually edited the version (kubernetesVersion: 1.10.3) with kops edit cluster and ran the upgrade.

It went fine, but now I'm seeing a warning because this version is above 1.10.3 and not marked as supported.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
