I am getting an error while running kubectl create -f cluster-autoscaler.yml:
error: error validating "cluster-autoscaler.yml": error validating data: found invalid field tolerations for v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:52:34Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
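For context: tolerations only became a first-class field on v1.PodSpec in Kubernetes 1.6, so the 1.5 API server above rejects the manifest at validation time even though the 1.6 client accepts it. Before 1.6, tolerations had to be expressed as an annotation on the pod template. A minimal contrast (values illustrative, not taken from the manifest below):

# Kubernetes 1.6+: first-class field under the pod spec
tolerations:
  - key: "node-role.kubernetes.io/master"
    effect: NoSchedule

# Pre-1.6: alpha annotation on the pod template metadata
annotations:
  scheduler.alpha.kubernetes.io/tolerations: '[{"key":"node-role.kubernetes.io/master", "effect":"NoSchedule"}]'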
==================
$ cat cluster-autoscaler.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: cluster-autoscaler
  template:
    metadata:
      labels:
        k8s-app: cluster-autoscaler
      annotations:
        # For 1.6, we keep the old tolerations in case of a downgrade to 1.5
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"dedicated", "value":"master"}]'
    spec:
      containers:
        - name: cluster-autoscaler
          image: gcr.io/google_containers/cluster-autoscaler:v0.4.0
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --cloud-provider=aws
            - --nodes=1:5:nodes.test-kubernetes.abc.com
          env:
            - name: AWS_REGION
              value: us-west-2
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
      volumes:
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs/ca-certificates.crt
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
        - key: "node-role.kubernetes.io/master"
          effect: NoSchedule
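Since the server here is still 1.5, one quick local fix (a sketch, assuming the masters carry the taint the annotation expects) is to delete the first-class tolerations block at the bottom of the pod spec and rely on the annotation already present in the pod template:

      nodeSelector:
        node-role.kubernetes.io/master: ""
      # tolerations field removed: a 1.5 API server rejects it, and the
      # scheduler.alpha.kubernetes.io/tolerations annotation in the pod
      # template metadata carries the toleration instead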
Thanks @endejoli, looks like we'll have to split this into 1.6 and pre-1.6 versions. As a workaround, can you use the 1.5 release? https://raw.githubusercontent.com/kubernetes/kops/release-1.5/addons/cluster-autoscaler/v1.4.0.yaml
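For example (kubectl create -f accepts a URL directly):

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/release-1.5/addons/cluster-autoscaler/v1.4.0.yaml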
@justinsb looks like this answers the fragility question on https://github.com/kubernetes/kops/pull/2288 😄
This seems like it may be fairly urgent for 1.6, but if not, I'm happy to take it on. I just have to ramp up on how the version-selector stuff works.
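For reference, kops channel addons can target a manifest at a range of server versions, which is presumably how the 1.6 / pre-1.6 split would work. A sketch of what the addon.yaml might look like (file names and version ranges are assumptions, not the final layout):

kind: Addons
metadata:
  name: cluster-autoscaler
spec:
  addons:
    # served to pre-1.6 clusters, which need the annotation-only manifest
    - version: 1.4.0
      selector:
        k8s-addon: cluster-autoscaler.addons.k8s.io
      manifest: v1.4.0.yaml
      kubernetesVersion: "<1.6.0"
    # served to 1.6+ clusters, where tolerations is a first-class PodSpec field
    - version: 1.6.0
      selector:
        k8s-addon: cluster-autoscaler.addons.k8s.io
      manifest: v1.6.0.yaml
      kubernetesVersion: ">=1.6.0"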
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close