Cluster-api: Improve clusterctl's --list-variables output

Created on 29 Jul 2020 · 8 comments · Source: kubernetes-sigs/cluster-api

User Story

As a developer/user/operator, I would like to see more details when I list the variables of a template YAML using clusterctl, so that I know what values will be used and which values I still need to specify.

Detailed Description
Currently, this is the output of a clusterctl command with `--list-variables`:

$ clusterctl config cluster foo -i aws:v0.5.4 --list-variables
Using configuration File="/Users/wfernandes/.cluster-api/clusterctl.yaml"
Fetching File="cluster-template.yaml" Provider="infrastructure-aws" Version="v0.5.4"
Variables:
  - AWS_CONTROL_PLANE_MACHINE_TYPE
  - AWS_NODE_MACHINE_TYPE
  - AWS_REGION
  - AWS_SSH_KEY_NAME
  - CLUSTER_NAME
  - CONTROL_PLANE_MACHINE_COUNT
  - KUBERNETES_VERSION
  - WORKER_MACHINE_COUNT

This is great. But now that we have support for default values (`${VAR:=defaultValue}`), it would be nice to also see any default values specified in the template.

Anything else you would like to add:
_Extra bonus:_ It would also be nice to know which variables I still need to specify. That is, out of the variables AWS_CONTROL_PLANE_MACHINE_TYPE, AWS_NODE_MACHINE_TYPE, and AWS_SSH_KEY_NAME, I may have set up only AWS_SSH_KEY_NAME in my clusterctl.yaml config.

A simple suggestion would be to display a table in which the blank/empty values are the ones that still need to be set.
This way we would gain insight into which default values are being used and which values are still missing and need to be configured.

| VARIABLE | VALUE |
|--------------------------------|------------|
| AWS_REGION | us-east1 |
| AWS_CONTROL_PLANE_MACHINE_TYPE | |
| AWS_NODE_MACHINE_TYPE | |
| AWS_SSH_KEY_NAME | my-ssh-key |
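The table above could be produced along these lines — a minimal Go sketch, assuming a simple regexp-based scan of the template (the `extractVariables` helper and its regex are illustrative, not clusterctl's actual implementation):

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
)

// varRe matches ${NAME} or ${NAME:=default} occurrences in a template.
var varRe = regexp.MustCompile(`\$\{(\w+)(?::=([^}]*))?\}`)

// extractVariables returns each variable name mapped to its default
// value ("" when the template provides none).
func extractVariables(template string) map[string]string {
	vars := map[string]string{}
	for _, m := range varRe.FindAllStringSubmatch(template, -1) {
		name, def := m[1], m[2]
		// Prefer a non-empty default if the variable appears more than once.
		if cur, ok := vars[name]; !ok || cur == "" {
			vars[name] = def
		}
	}
	return vars
}

func main() {
	tmpl := `
region: ${AWS_REGION:=us-east1}
instanceType: ${AWS_NODE_MACHINE_TYPE}
sshKeyName: ${AWS_SSH_KEY_NAME}
`
	vars := extractVariables(tmpl)
	names := make([]string, 0, len(vars))
	for n := range vars {
		names = append(names, n)
	}
	sort.Strings(names)
	// Two-column view: an empty VALUE means the user must supply it.
	fmt.Printf("%-30s %s\n", "VARIABLE", "VALUE")
	for _, n := range names {
		fmt.Printf("%-30s %s\n", n, vars[n])
	}
}
```

Variables whose value column comes back empty are the ones that must still be set in clusterctl.yaml or the environment.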

/kind feature
/area clusterctl



All 8 comments

/milestone v0.3.x

/milestone v0.4.0

@wfernandes Hi, I would like to work on this.

/assign

Thanks @prankul88 !
/lifecycle active

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

@prankul88 Are you still interested in tackling this issue?

@vincepri Yes, had left it due to some issues. Would like to give it a shot once this week.
