According to http://kubernetes.io/docs/user-guide/scheduled-jobs :
You need a working Kubernetes cluster at version >= 1.4, with batch/v2alpha1 API turned on by passing --runtime-config=batch/v2alpha1 while bringing up the API server (see Turn on or off an API version for your cluster for more).
Please provide a way to do that through kops on AWS, please!
I think we'll have to expose the runtime-config flag. I actually thought I did, but I exposed it as a map, and it looks like this isn't a map option. I'll take a look.
But would it be better to _also_ let you specify the specific functionality you want enabled? So you could say "scheduledJobs: true" and you would get batch/v2alpha1 enabled, and then later, when scheduled jobs went GA, we would stop adding the flag.
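For concreteness, the two approaches might look like this in the cluster spec. Only the runtimeConfig form exists; the scheduledJobs shorthand below is purely hypothetical, sketching the idea above, and is not a real kops field:

# What actually works: pass the raw runtime-config flag through to the API server
kubeAPIServer:
  runtimeConfig:
    batch/v2alpha1: "true"

# Hypothetical shorthand (not implemented in kops) that could map to the same flag
kubeAPIServer:
  scheduledJobs: true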
So at least on the first part, this should work if you build from master.
kops edit cluster, and add this to the spec:
kubeAPIServer:
  runtimeConfig:
    batch/v2alpha1: "true"
If you kops edit cluster again just to check, it should look like this:
...
  etcdClusters:
  - etcdMembers:
    - name: us-east-1b
      zone: us-east-1b
    name: main
  - etcdMembers:
    - name: us-east-1b
      zone: us-east-1b
    name: events
  kubeAPIServer:
    runtimeConfig:
      batch/v2alpha1: "true"
  kubernetesVersion: v1.4.0
...
Then you'll have to force a re-read of the configuration. The easiest way is probably to do a rolling-update of the whole cluster, though change detection is a bit wonky here. (If you'd prefer, you actually only need to terminate the master instance.)
kops rolling-update cluster --force --yes
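If you'd rather bounce only the master, one way to do that with the AWS CLI (assuming kops's default instance naming; the zone and cluster name here are placeholders) is to terminate the master instance and let its Auto Scaling group recreate it with the new flags:

# Look up the master instance via its kops-assigned Name tag, then terminate it
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=master-us-east-1b.masters.<cluster-name>" \
  --query "Reservations[].Instances[].InstanceId" --output text
aws ec2 terminate-instances --instance-ids <instance-id-from-above>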
When it comes back, if you want to check, you can run:
kubectl get pods --all-namespaces | grep kube-apiserver
kubectl describe --namespace=kube-system pod kube-apiserver-ip-172-20-85-46.ec2.internal | grep runtime
And you should see --runtime-config=batch/v2alpha1=true
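As an extra check that the API group is actually being served (not just that the flag is set), this should list the alpha batch group:

kubectl api-versions | grep batch/v2alpha1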
Then I was able to do:
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/user-guide/sj.yaml
Note that your kubectl version for the client must be 1.4 (I think)
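If you can't fetch that URL, a minimal batch/v2alpha1 ScheduledJob along the same lines looks like this (the name, schedule, and image are illustrative and not necessarily what sj.yaml contains):

apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello from Kubernetes"]
          restartPolicy: OnFailure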
Thanks!
Moving this to documentation
Should check issue #746 too
This is no longer an issue on master, I believe. In summary, after adding the additional properties (note the quotes around "true")
kubeAPIServer:
  runtimeConfig:
    "batch/v2alpha1": "true"
you must do a forced rolling update (kops rolling-update cluster {cluster} --force --yes), since kops doesn't yet have a way to determine whether or not the process on the master is running with the correct arguments.
Closing
In my case (with AWS) after modifying the cluster I was getting this error:
Cluster.Spec.KubeAPIServer.CloudProvider: Invalid value: "": Did not match cluster CloudProvider
Solution: Add cloudProvider: aws, as shown below:
kubeAPIServer:
  cloudProvider: aws
  runtimeConfig:
    batch/v2alpha1: "true"
Reopening since we need to document this
this seems awfully invasive (recreating the entire cluster, including ec2 instances), just to pass an extra parameter to the apiserver... :/
@astanciu you do not have to recreate the entire cluster; you only need to roll the masters. kops is designed to make the cluster EC2 instances immutable. As we get more support for bare metal, that will change somewhat, but not much. See the rolling-update options for targeting a specific instance group.
Just as an update, running on kops 1.7.0 and adding the cronjob support as mentioned above, I did not need to add the cloudProvider: aws line and it still worked for me (of course also running on AWS). Good to know about just needing to recreate the masters. So for quick reference, this is what one has to run, depending on their masters' instance groups:
kops rolling-update cluster --force --yes --instance-group master-eu-central-1a,master-eu-central-1b,master-eu-central-1c
https://github.com/kubernetes/kops/issues/618#issuecomment-281121955 is the right answer
If you are using a kubectl 1.8 CLI with a 1.7.x cluster, then the following command will create the CronJob:
kubectl create -f templates/cronjob.yaml --validate=false
Otherwise you'll get the error:
error: error validating "templates/cronjob.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"batch", Version:"v2alpha1", Kind:"CronJob"}; if you choose to ignore these errors, turn validation off with --validate=false
Yup. File an issue with #sig-cli
I don't think it's a bug, since 1.8 expects batch/v1beta1. --validate=false lets you force the API version against a 1.7 server.
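For reference, the manifest itself only differs in apiVersion between the two server versions; a sketch (metadata and container details are illustrative placeholders for whatever templates/cronjob.yaml actually contains):

# 1.7.x server with batch/v2alpha1 enabled
apiVersion: batch/v2alpha1
# For a 1.8+ server, where CronJob graduated to beta, use instead:
# apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date"]
          restartPolicy: OnFailure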
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close