Cluster-api: [cabpk] Add Support for Component Config configuration files

Created on 23 Aug 2019 · 19 comments · Source: kubernetes-sigs/cluster-api

/kind feature

Describe the solution you'd like
SIG Cluster Lifecycle (SCL) is pushing the adoption of component config objects, and kubeadm already accepts KubeletConfiguration and KubeProxyConfiguration as part of its config file.
I would like to be able to use component config objects with CABPK as well.

Anything else you would like to add:
The first possible approach to achieve this is to add the KubeletConfiguration and KubeProxyConfiguration types to the KubeadmConfig object, but this comes with both pros (e.g. OpenAPI validation) and cons (e.g. more dependencies and dependency management).

So, I would also like to put on the table some alternative approaches, based on the assumption that CABPK only needs to pass component configs through to kubeadm (no logic based on the component config content is required). If this assumption holds, component configs can basically be treated as "special" Files, so we can consider whether to:

  • Extend the current Files type with a ComponentConfig bool that signals to CABPK that the file should be appended to the kubeadm config file
  • Add a new []ComponentConfigFile field to the KubeadmConfig object, so we have more explicit semantics around component configs
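As a sketch only, the second option might read like the manifest below. The componentConfigFiles field name and its shape are hypothetical (neither exists in the CABPK API); the fragment just illustrates the proposed explicit semantics:

```yaml
# Hypothetical KubeadmConfig fragment -- componentConfigFiles is NOT an
# existing field; it sketches the []ComponentConfigFile proposal above.
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: example
spec:
  componentConfigFiles:
    - content: |
        kind: KubeProxyConfiguration
        apiVersion: kubeproxy.config.k8s.io/v1alpha1
        mode: ipvs
```

Each entry would be appended verbatim as an extra YAML document in the kubeadm config file that CABPK generates.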

/cc @chuckha @detiber

area/ux kind/feature lifecycle/frozen priority/backlog


All 19 comments

I'm torn on this: if we just pass the configs through without any kind of validation, then we risk hard-to-debug errors.

That said, even if we do add basic validation, we still wouldn't have all the validation that is done on the backend and could still end up with hard-to-debug errors...

The question then becomes: just how easy do we want to make it for users to shoot themselves in the foot?

/priority important-longterm
/milestone Next

I'm OK with big foot-guns for now, especially given the work to surface kubeadm init logs onto the machine, which should mitigate the hard-to-debug validation concerns. Until then, I don't see a problem with adding this functionality as outlined by @fabriziopandini.

I'm a little wary of the bool signaling a ComponentConfig... I think the original issue outlines two possible approaches?

If so, I'm in favor of ComponentConfigs and having them be appended to the kubeadm config.

/milestone v0.3.0
/priority important-longterm

@fabriziopandini do you see this as a hard requirement for v1a3?

@ncdc As far as I know, the current API for CABPK is enough for most use cases, so IMO this is backlog material and can be moved to the next milestone.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

/area ux

Is there a documented workaround for this? For example, if I want to specify a custom KubeProxyConfiguration?

We are looking for this feature to enable ipvs mode in KubeProxyConfiguration.

Workaround: create a generator script via the files list, then invoke it from preKubeadmCommands:

    preKubeadmCommands:
      - /tmp/generate-kube-proxy.sh

The script appends a KubeProxyConfiguration document to the generated kubeadm config files:

    files:
      - path: /tmp/generate-kube-proxy.sh
        permissions: "0700"
        owner: root:root
        content: |
          #!/bin/bash
          for i in $(ls /tmp | grep kubeadm); do
              cat <<EOF>> /tmp/$i
          ---
          kind: KubeProxyConfiguration
          apiVersion: kubeproxy.config.k8s.io/v1alpha1
          mode: ipvs
          EOF
          done
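The append step can be exercised locally against a scratch directory (the workdir below stands in for /tmp on the bootstrap host; the kubeadm.yaml seed file is a stand-in for the config CABPK writes):

```shell
#!/bin/bash
# Simulate the workaround against a scratch directory instead of /tmp on a node.
workdir=$(mktemp -d)
echo "apiVersion: kubeadm.k8s.io/v1beta2" > "$workdir/kubeadm.yaml"

# Same loop as the workaround script: append a KubeProxyConfiguration
# document to every kubeadm config file found in the directory.
for i in $(ls "$workdir" | grep kubeadm); do
    cat <<EOF >> "$workdir/$i"
---
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
mode: ipvs
EOF
done

grep "mode: ipvs" "$workdir/kubeadm.yaml"   # the document was appended
```

Because kubeadm accepts multiple YAML documents in one config file, the appended KubeProxyConfiguration is picked up on the next kubeadm init/join run without any other changes.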

Trying to understand how onerous it would be to add KubeletConfiguration and KubeProxyConfiguration to KubeadmConfig -- the first possible approach noted by @fabriziopandini.

Looking at the code, it appears that kubeadm itself does no validation on either KubeletConfiguration or KubeProxyConfiguration. kubeadm has the framework for component specific validation, but currently, that does nothing:
https://github.com/kubernetes/kubernetes/blob/a054010d032b301e495d1a421f53b9a37a0a0109/cmd/kubeadm/app/componentconfigs/configset.go#L244

What kubeadm does provide is some default values for certain fields if none are specified. Here is the one for kube-proxy:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/componentconfigs/kubeproxy.go#L86

Beyond KubeletConfiguration and KubeProxyConfiguration, I don't see any other xyzConfiguration, as all other Kubernetes core components have their fields split across the init/join/cluster configurations.

CABPK could take a similar approach to kubeadm -- treat these two configurations as just pass-through w/o validation. We would document this of course and possibly provide some troubleshooting steps in case there are failures (e.g. go look at the cloud-config logs on machine-x).

In any case, it would be great to prioritize this as people who are building more complex clusters will definitely want to tweak kube-proxy and kubelet settings.

Here's a user story in case it helps motivate folks: we need to edit the kube-proxy config to bind metrics to 0.0.0.0 instead of 127.0.0.1, so that Prometheus can scrape metrics from kube-proxy on Cluster API-provisioned clusters.

One other workaround a co-worker came up with for editing the kube-proxy config is to simply update the kube-proxy configmap:

So apply a configmap named "kube-proxy" into the kube-system namespace:

data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.96.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: "0.0.0.0:10249"
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
<etc....>

Is there a major downside to just editing the configmap? If nothing else I hope this at least gives folks another option for configuring kube-proxy in the interim till this is implemented.
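For completeness, the configmap edit can be done with stock kubectl; the rollout restart is needed because kube-proxy only reads its config at startup (the daemonset name kube-proxy is the kubeadm default):

```
# Edit the kube-proxy config in place...
kubectl -n kube-system edit configmap kube-proxy

# ...then restart the daemonset so the pods pick up the new config.
kubectl -n kube-system rollout restart daemonset kube-proxy
```

Note that, as discussed below, anything kubeadm regenerates (e.g. during upgrades) may overwrite this edit, which is part of why first-class support in KubeadmConfig is still desirable.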

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

One other workaround a co-worker came up with for editing the kube-proxy config is to simply update the kube-proxy configmap:

So apply a configmap named "kube-proxy" into the kube-system namespace:

<etc....>

Is there a major downside to just editing the configmap? If nothing else I hope this at least gives folks another option for configuring kube-proxy in the interim till this is implemented.

I guess one downside of editing the configmap for kube-proxy is that it duplicates the configuration of the cluster, for example things like clusterCIDR: 10.96.0.0/16.

I very much think it's still desirable to allow this to be configurable as part of the KubeadmConfig.

I'm also having this use case (where I want to monitor kube-proxy using Prometheus):

Here's a user story in case it helps motivate folks: we need to edit the kube-proxy config to bind metrics to 0.0.0.0 instead of 127.0.0.1, so that Prometheus can scrape metrics from kube-proxy on Cluster API-provisioned clusters.

/lifecycle frozen

