Kubespray: Custom flags for apiserver_custom_flags not working

Created on 15 Mar 2019 · 11 comments · Source: kubernetes-sigs/kubespray

Environment:

  • Cloud-provider: Azure
  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
    Linux 3.10.0-693.11.6.el7.x86_64 x86_64
    NAME="Red Hat Enterprise Linux Server"
    VERSION="7.6 (Maipo)"
    ID="rhel"
    ID_LIKE="fedora"
    VARIANT="Server"
    VARIANT_ID="server"
    VERSION_ID="7.6"
    PRETTY_NAME="Red Hat Enterprise Linux Server 7.6 (Maipo)"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:redhat:enterprise_linux:7.6:GA:server"
    HOME_URL="https://www.redhat.com/"
    BUG_REPORT_URL="https://bugzilla.redhat.com/"
    REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
    REDHAT_BUGZILLA_PRODUCT_VERSION=7.6
    REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
    REDHAT_SUPPORT_PRODUCT_VERSION="7.6"

  • Version of Ansible (ansible --version):
    ansible 2.7.8

Network plugin used:
weave

Variables for the custom flags, set in
/roles/kubespray/roles/kubernetes/master/defaults/main.yml:

apiserver_custom_flags:
   - "--repair-malformed-updates=false"
   - "--service-account-lookup=true"
   - "--request-timeout=300s"

Output of the Ansible run. These flags were expected to be set in kube-apiserver.yaml, but none of them appear in the generated manifest.

Result:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --allow-privileged=true
    - --apiserver-count=3
    - --authorization-mode=Node,RBAC
    - --bind-address=0.0.0.0
    - --cloud-config=/etc/kubernetes/cloud_config
    - --cloud-provider=azure
    - --enable-admission-plugins=NodeRestriction,DenyEscalatingExec
    - --endpoint-reconciler-type=lease
    - --insecure-port=0
    - --kubelet-preferred-address-types=InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP
    - --runtime-config=admissionregistration.k8s.io/v1alpha1
    - --service-node-port-range=30000-32767
    - --storage-backend=etcd3
    - --advertise-address=10.229.33.4
    - --client-ca-file=/etc/kubernetes/ssl/ca.crt
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem
    - --etcd-certfile=/etc/ssl/etcd/ssl/node-westeurope-devinfra-platform-k8s-masters-02.pem
    - --etcd-keyfile=/etc/ssl/etcd/ssl/node-westeurope-devinfra-platform-k8s-masters-02-key.pem
    - --etcd-servers=https://10.229.32.4:2379,https://10.229.32.5:2379,https://10.229.32.6:2379
    - --kubelet-client-certificate=/etc/kubernetes/ssl/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/ssl/apiserver-kubelet-client.key
    - --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/ssl/sa.pub
    - --service-cluster-ip-range=10.233.0.0/18
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key

Based on documentation here:

https://github.com/kubernetes-sigs/kubespray/blob/3ce033995f9a637ad183cd42890426f01931ee35/docs/vars.md

The possible vars are:

apiserver_custom_flags
controller_mgr_custom_flags
scheduler_custom_flags
kubelet_custom_flags
kubelet_node_custom_flags

Labels: help wanted, kind/bug, lifecycle/stale

Most helpful comment

See #5108; the corresponding variables that should be used are:

kube_kubeadm_scheduler_extra_args
kube_kubeadm_apiserver_extra_args
kube_kubeadm_controller_extra_args
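
For example, applying two of the reporter's flags via these variables might look like this in group_vars (a sketch: kubeadm extra args take key/value form without the leading dashes, and the exact file placement is illustrative):

kube_kubeadm_apiserver_extra_args:
  service-account-lookup: "true"
  request-timeout: "300s"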

All 11 comments

Same issue in Kubespray v2.8.1.

It seems the variable is referenced in kube-apiserver.manifest.j2, lines 157-161, but it has no effect:

157 {% if apiserver_custom_flags is string %}
158     - {{ apiserver_custom_flags }}
159 {% else %}
160 {% for flag in apiserver_custom_flags %}
161     - {{ flag }}
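
For reference, given the apiserver_custom_flags list from the report, that loop should render these entries into the manifest's command array (the expected output, for comparison with the Result above):

    - --repair-malformed-updates=false
    - --service-account-lookup=true
    - --request-timeout=300s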

I ran into a similar issue when trying to add --feature-gates=... to the apiserver in order to enable Digital Ocean's CSI driver. I ended up editing the manifests directly on the cluster instead of going through Ansible.

Would be very interested in knowing why these custom flags aren't working properly!

I think some kube-apiserver parameters can be set through predefined Ansible variables where they exist, for example the service node port range. I made many attempts and investigations, but I'm still not sure why the parameters are not applied. :(

Same issue here with v2.8.3 and kubeadm enabled.

I suspect the issue is that you are running the cluster.yml playbook to change the API server flags on a cluster that is already deployed. This will not work for any vars used in the static pod manifests (apiserver, controller manager, scheduler), because those manifests are generated once at cluster creation by kubeadm init. Subsequent runs skip the kubeadm init phase because it detects that kubeadm has already run.

However, you can force a kubeadm upgrade by running the upgrade-cluster.yml playbook, which WILL regenerate the static pod manifests.
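
For example (a sketch; the inventory path is illustrative):

ansible-playbook -i inventory/mycluster/hosts.yml -b upgrade-cluster.yml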

Is that something we need to add in the kubeadm templates maybe?

/kind bug
/help

@Miouge1:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

Is that something we need to add in the kubeadm templates maybe?

/kind bug
/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

I think it might be safe to always run kubeadm init phase control-plane, which would only regenerate the manifests and skip the other init phases.
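
Something like this would regenerate only the control-plane manifests (a sketch; the kubeadm config path is an assumption and may differ per setup):

kubeadm init phase control-plane all --config /etc/kubernetes/kubeadm-config.yaml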

This bug seems to have gotten worse in release 2.10, as the only reference I can find to *_custom_flags is in roles/kubernetes/node/templates/kubelet.kubeadm.env.j2, line 111. In that case the only such flags supported in 2.10 are kubelet_custom_flags and kubelet_node_custom_flags.
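
For instance, a kubelet flag could still be injected that way (a sketch; the eviction flag is only an illustration):

kubelet_custom_flags:
  - "--eviction-hard=memory.available<256Mi"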

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

See #5108; the corresponding variables that should be used are:

kube_kubeadm_scheduler_extra_args
kube_kubeadm_apiserver_extra_args
kube_kubeadm_controller_extra_args