Cluster-api: Configure Cluster API manager/controllers with a ConfigMap

Created on 11 Nov 2019 · 13 comments · Source: kubernetes-sigs/cluster-api

User Story

As a developer and a user, I find it annoying to have to modify the Deployment YAML to change parameters on my controller. I would like a ConfigMap with the configuration options all explicitly set to their default values, so that I can tweak them, restart my controller, and see the effect take place without modifying the Deployment's YAML directly.
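A minimal sketch of what such a ConfigMap could look like, in a key-per-setting style; the name, namespace, and keys below are hypothetical and simply mirror the kind of flags the manager already accepts:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: capi-manager-config   # hypothetical name
  namespace: capi-system
data:
  # Each key mirrors an existing command-line flag, with its default
  # value written out explicitly so it is obvious what can be tweaked.
  v: "2"                 # log verbosity
  leader-elect: "true"   # leader election on/off
  sync-period: "10m"     # resync interval
```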

Detailed Description

For example, sometimes I want to watch my controller reconcile something with more verbose logging. It would be nicer to change a ConfigMap and kick the pod than to modify the Deployment's YAML and remember the command-line arguments.

Anything else you would like to add:

This is an idea I swiped from doing a bit of research into Knative, but I really like this experience over modifying a Deployment's YAML directly, even if it requires a kubectl delete pod call to reload the pod.

I'm only asking for the fields that are currently configurable via the command line to be exposed in this ConfigMap, not every possible configuration option. A side benefit is that it gives us an easy place to expose more configurability in the future, if users end up requesting more control over manager configuration.
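To make the wiring concrete, here is a hedged sketch of how the manager Deployment might consume such a ConfigMap; the `--config-dir` flag is an assumption, not an existing Cluster API interface, while projecting ConfigMap keys as files is standard Kubernetes volume behavior:

```yaml
# Fragment of the manager Deployment (illustrative only).
spec:
  template:
    spec:
      containers:
      - name: manager
        args:
        # Hypothetical flag; each ConfigMap key appears as a file
        # under this directory via the volume projection below.
        - --config-dir=/etc/capi
        volumeMounts:
        - name: config
          mountPath: /etc/capi
      volumes:
      - name: config
        configMap:
          name: capi-manager-config   # the ConfigMap sketched earlier
```

After editing the ConfigMap, a `kubectl delete pod` on the manager pod (as described above) would restart it against the new values.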

/kind feature

area/api area/ux kind/feature lifecycle/frozen priority/backlog

Most helpful comment

If we do this, it needs to be a versioned type.

All 13 comments

If we do this, it needs to be a versioned type.

Related to https://github.com/kubernetes-sigs/cluster-api/issues/1767. Not exactly a dupe, but it's using ConfigMap resources to provide defaults to various parts of CAPI.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/lifecycle frozen

/area api
/area ux

This needs a proposal

Two considerations:

  1. As of today, with clusterctl init / clusterctl upgrade, usage of controller flags is complicated; this change could help solve at least the upgrade part, though I'm not sure it can help for init as well.
  2. A major point of attention for component config (this is a component config) is the lack of a proper solution for managing upgrades of the config version. However, in the context of Cluster API, I think we can assume that each controller should take care of upgrading its own component config when required, as part of the startup process (see the sketch below).
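For instance, a versioned component config document, in the spirit of the "needs to be a versioned type" comment above, might look like the following; the apiVersion, kind, and fields here are illustrative, not an actual Cluster API or controller-runtime type:

```yaml
# Hypothetical versioned component config. Because the document carries
# apiVersion/kind, a controller can detect an outdated version at startup
# and convert it before use.
apiVersion: controller-manager.cluster.x-k8s.io/v1alpha1
kind: ControllerManagerConfiguration
leaderElection:
  leaderElect: true
syncPeriod: 10m
logging:
  verbosity: 2
```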

BTW, if this work is still pending when I get the condition work started, I will be happy to help.

@vincepri what about closing this issue?
clusterctl now supports envsubst, and a further enhancement on how to manage controller flags is in the scope of the management cluster operator initiative...
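For context, clusterctl runs provider components through envsubst-style substitution, so a flag value can already be parameterized at init time; the variable name below is illustrative:

```yaml
# Fragment of a provider components manifest processed by clusterctl.
args:
# ${VAR:=default} resolves from the environment, falling back to the
# default when the variable is unset.
- --v=${CAPI_LOG_LEVEL:=2}
```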

Let's keep it open; this conversation might be useful if/when controller-runtime upstream adds support for ComponentConfig.

/close

Closing this in favor of the management cluster operator #3427 + ComponentConfig support in controller-runtime v0.7

@vincepri: Closing this issue.

In response to this:

/close

Closing this in favor of the management cluster operator #3427 + ComponentConfig support in controller-runtime v0.7

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
