I've been trying to use admission control with kops. The documentation explains that I need to pass a flag to the API server (https://kubernetes.io/docs/admin/admission-controllers/#how-do-i-turn-on-an-admission-control-plug-in), but I don't really understand where to pass these flags.
How can I pass runtime flags to the API server in kops? Specifically, I want to use this to pass my own list of admission controllers.
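For reference, what the linked docs describe is a flag on the kube-apiserver command line; it would look roughly like this (the controller list here is just an illustration, not my final set):

kube-apiserver --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota ...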
I'm currently going through the user data script for the nodes, and it seems like maybe I'd want to do something either here:
# We can't run in the foreground because of https://github.com/docker/docker/issues/23793
( cd ${INSTALL_DIR}; ./nodeup --install-systemd-unit --conf=${INSTALL_DIR}/kube_env.yaml --v=8 )
or in kube_env.yaml, but I wasn't able to find any documentation for this:
cat > kube_env.yaml << __EOF_KUBE_ENV
Assets:
- bad424eee321f4c9b2b800d44de2e1789843da19@https://storage.googleapis.com/kubernetes-release/release/v1.7.2/bin/linux/amd64/kubelet
- ce8802dccc1aa5cffa15a04eee8326ba5c911d32@https://storage.googleapis.com/kubernetes-release/release/v1.7.2/bin/linux/amd64/kubectl
- 1d9788b0f5420e1a219aad2cb8681823fc515e7c@https://storage.googleapis.com/kubernetes-release/network-plugins/cni-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz
- 5d95d64d7134f202ba60b1fa14adaff138905d15@https://kubeupv2.s3.amazonaws.com/kops/1.7.0/linux/amd64/utils.tar.gz
ClusterName: k8s.jorge-test-cluster.jorge.fail
ConfigBase: s3://jorge-k8s-test-bucket/k8s.jorge-test-cluster.jorge.fail
InstanceGroupName: master-us-west-2a
Tags:
- _automatic_upgrades
- _aws
- _kubernetes_master
channels:
- s3://jorge-k8s-test-bucket/k8s.jorge-test-cluster.jorge.fail/addons/bootstrap-channel.yaml
protokubeImage:
  hash: 5bd97a02f0793d1906e9f446c548ececf1444737
  name: protokube:1.7.0
  source: https://kubeupv2.s3.amazonaws.com/kops/1.7.0/images/protokube.tar.gz
__EOF_KUBE_ENV
Seems like the solution is:
kops edit cluster ...
Then adding the following:
spec:
  kubeAPIServer:
    admissionControl:
    - NamespaceLifecycle
    - LimitRanger
    - ServiceAccount
    - PersistentVolumeLabel
    - DefaultStorageClass
    - ResourceQuota
    - PodSecurityPolicy
    - DefaultTolerationSeconds
Then kops update cluster ...
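Spelled out as commands (the cluster name is a placeholder, and the preview/apply/rolling-update sequence is my assumption about a typical kops run, not something stated above):

# edit the cluster spec and add the kubeAPIServer.admissionControl block shown above
kops edit cluster $CLUSTER_NAME

# preview the change, then apply it
kops update cluster $CLUSTER_NAME
kops update cluster $CLUSTER_NAME --yes

# apiserver flags only take effect once the masters are cycled
kops rolling-update cluster $CLUSTER_NAME --yes

After the masters come back up, you can confirm the flag landed by running ps aux | grep kube-apiserver on a master node.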