Kops: Use variables in Terraform output with separate files

Created on 21 Aug 2016 · 21 comments · Source: kubernetes/kops

If Terraform output would generate variables.tf as well as kubernetes.tf, pushing things like DNS, ASG sizing, etc. to variables, users would have much more flexibility in how they maintain kops-generated clusters.

In my case, I have 4 environments that I'd like to manage with kops/TF. Rather than keeping 4 copies of kubernetes.tf, it'd be ideal if I could have a single kubernetes module that gets referenced by each environment, passing variables that I can choose to manage myself, reducing a lot of code duplication.

When I want to update something later, I can re-generate the new kubernetes.tf and merge in any new variables without losing scaling changes. Ideally, though, even the "nodes" piece could be a module of its own that gets referenced by each IG.
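As a rough sketch of the kind of per-environment wrapper this would enable (the module name, source path, and variable names here are hypothetical, not anything kops generates today):

module "kubernetes" {
  # the kops-generated kubernetes.tf + variables.tf would live in this directory
  source = "../modules/kubernetes"

  # per-environment overrides; variable names are illustrative
  cluster_name       = "k8s.dev.mysite.com"
  ig_nodes_min_size  = 3
  ig_nodes_max_size  = 5
  ig_nodes_disk_size = 40
}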

Labels: area/terraform, lifecycle/rotten

All 21 comments

Just want to add disk size to the list for this. We have one image that is over 1 GB, so it's a little difficult to manage on a 20 GB disk. I'm going to override the TF output for now, but a variables file would greatly help!

@jaygorrell thanks for following up. We are really looking for a TF guru to help us wrangle our TF. If you are interested please let me know. Based on feedback we need to strive to keep TF support stable in our tool.

How would you propose a design to do this? I can see some interesting edge cases, since the source of truth is the cloud and not TF; that's one of the challenges of having separate sources of truth.

Again, really appreciate your feedback; we want to keep TF support available across our tooling. Any help is really appreciated.

I'm pretty comfortable with TF so I can certainly try to help out -- we're using it to do some pretty fancy things across 5 environments in their own accounts.

I definitely see the concerns with having two sources of truth here, but that's a concern regardless. This proposal in particular only pushes the "hard-coded" values into a variables.tf file to isolate configuration. You may be onto something that it encourages editing in TF though, so perhaps it's better to just expand the configuration in Kops instead.

To give an example, I don't want SSH exposed to the world so I set it up for a specific CIDR -- but because I wanted to allow our team to use kubectl from anywhere, I had to override the SG for that (these currently share the same kops config):

resource "aws_security_group_rule" "https-external-to-master" {
    type = "ingress"
    security_group_id = "${aws_security_group.masters-k8s-<snip>.id}"
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
}

If https_external_to_master_cidr and ssh_external_to_master_cidr were Terraform variables, I could override just one value instead of the whole SG block. That's the goal here -- but yes, if the configs were split in Kops, it solves the same problem. I could still see this as a useful way to provide flexibility for things that haven't quite been added to Kops yet, though.
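For illustration only (kops does not emit these variables today; the names are taken from the sentence above, and the CIDR value is made up), the generated rule could reference a variable and the override would shrink to a single tfvars line:

variable "ssh_external_to_master_cidr" {
  default = "0.0.0.0/0"
}

resource "aws_security_group_rule" "ssh-external-to-master" {
  type              = "ingress"
  security_group_id = "${aws_security_group.masters-k8s-<snip>.id}"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["${var.ssh_external_to_master_cidr}"]
}

# terraform.tfvars -- restrict SSH without touching the generated resource
ssh_external_to_master_cidr = "203.0.113.0/24"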

On a related note, if there are other TF discussions, I'd be happy to take part! I'd love to see it remain a supported option going forward... we probably couldn't use Kops easily without it.

Help is appreciated!!! We will be starting office hours for developers, so file a PR, or swing by and talk about getting started on one.

I think in general we prefer to expose functionality not only to terraform users, so if something should be a variable in terraform, it should be a variable in kops :-)

Not sure if there are any TF exceptions to that - I'm certainly not opposed to making the TF output more modular with variables, but I don't want TF-only functionality, and I also don't want to go too crazy with variables - I've seen way too many CloudFormation scripts that were just unreadable because of overuse of variables (TF seems better, but not immune)

Definitely, I don't think this should be considered TF-specific functionality -- rather, it's just a good format for TF output to declare things likely to change as variables. One simple example: if the cluster name "k8s.dev.domain.com" were a variable, users could use it to create CloudWatch or Datadog alarms without duplicating hard-coded values.
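As a hedged sketch of what that buys you (the alarm itself is made up; the point is only the ${var.cluster_name} reference):

variable "cluster_name" {
  default = "k8s.dev.domain.com"
}

resource "aws_cloudwatch_metric_alarm" "nodes-cpu-high" {
  alarm_name          = "${var.cluster_name}-nodes-cpu-high"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80
  period              = 300
  evaluation_periods  = 2

  dimensions {
    AutoScalingGroupName = "nodes.${var.cluster_name}"
  }
}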

As for making them variables in kops as well - that sounds ideal, but many of these already _are_ variables in Kops. That's actually a good starting point - if it's in the cluster or IG configuration, make it a variable in TF.

Oh - you just hit me with a cluestick :-) We should also export variables so they can be used elsewhere. Very open to start doing that ... if you have one in particular to start with (and ideally an example of what the TF should look like) I can wire it up, and then the others should be easy!

Yes, outputs would be great as well! Just keep in mind some people will use this as its own deployment that gets referenced (needing outputs) and others will drop it into an existing deployment (needing variables).

To give a few examples:

variable "cluster_name" { default = "k8s.dev.mysite.com" }
variable "ig_nodes" { default = "nodes.k8s.dev.mysite.com" }
variable "ig_nodes_disk_size { default = 20 }
variable "ig_nodes_disk_type { default = "gp2" }
variable "ig_nodes_min_size" { default = 3 }
variable "ig_nodes_max_size" { default = 3 }

// current output resources go here

output "cluster_name"  { value = "${var.cluster_name}" }
output "ig_nodes"  { value = "${var.ig_nodes}" }
output "ig_nodes_asg_arn" { value = "${aws_autoscaling_group.nodes-k8s-dev-mysite-com.arn}"}
output "ig_nodes_asg_name" { value = "${aws_autoscaling_group.nodes-k8s-dev-mysite-com.name}"}

We need this as well for testing now. This allows us to validate the testing TF and clean it up a bit more.

All of this should be in a separate file:

output "bastion_security_group_ids" {
  value = ["${aws_security_group.bastion-privateweave-example-com.id}"]
}

output "bastions_role_arn" {
  value = "${aws_iam_role.bastions-privateweave-example-com.arn}"
}

output "bastions_role_name" {
  value = "${aws_iam_role.bastions-privateweave-example-com.name}"
}

output "cluster_name" {
  value = "privateweave.example.com"
}

output "master_security_group_ids" {
  value = ["${aws_security_group.masters-privateweave-example-com.id}"]
}

output "masters_role_arn" {
  value = "${aws_iam_role.masters-privateweave-example-com.arn}"
}

output "masters_role_name" {
  value = "${aws_iam_role.masters-privateweave-example-com.name}"
}

output "node_security_group_ids" {
  value = ["${aws_security_group.nodes-privateweave-example-com.id}"]
}

output "node_subnet_ids" {
  value = ["${aws_subnet.us-test-1a-privateweave-example-com.id}"]
}

output "nodes_role_arn" {
  value = "${aws_iam_role.nodes-privateweave-example-com.arn}"
}

output "nodes_role_name" {
  value = "${aws_iam_role.nodes-privateweave-example-com.name}"
}

output "region" {
  value = "us-test-1"
}

output "vpc_id" {
  value = "${aws_vpc.privateweave-example-com.id}"
}

provider "aws" {
  region = "us-test-1"
}

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

/remove-lifecycle stale

I still think this is a useful thing to have. I think the only advantage of using Terraform instead of the default method is being able to later extend the Terraform definition with other cloud resources (databases, alarms, external load balancers...). Having the information that comes from kops in TF variables would be useful not so that we can override it (since, as @chrislovecnm said, TF is not the source of truth), but so that we can reference it in other user-created TF resources.
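For that referencing use case, a rough sketch using a terraform_remote_state data source (the bucket, key, and database security group are placeholders; the output names match the example above):

data "terraform_remote_state" "kops" {
  backend = "s3"

  config {
    bucket = "my-terraform-state"       # placeholder
    key    = "kops/terraform.tfstate"   # placeholder
    region = "us-test-1"
  }
}

# A user-created security group in the kops VPC
resource "aws_security_group" "database" {
  name   = "database"
  vpc_id = "${data.terraform_remote_state.kops.vpc_id}"
}

# Allow the kops nodes to reach the user-managed database
resource "aws_security_group_rule" "db-from-nodes" {
  type                     = "ingress"
  security_group_id        = "${aws_security_group.database.id}"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  source_security_group_id = "${element(data.terraform_remote_state.kops.node_security_group_ids, 0)}"
}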

Would be great if someone implemented:

main.tf
variables.tf
outputs.tf

Spent a bunch of time with TF today.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

/remove-lifecycle rotten

(I don't know if keeping this issue fresh is the right thing to do when an issue is still relevant to me, but I still haven't had the time or the knowledge to contribute)

I am using a variable in the output of a module.
Example:

output "client_code" { value = "var.client_code" }

I am able to use the output in the subsequent modules
client_code = "${module.data-center.client_code}"

This works fine while building the resources. However, during destroy it throws an error:

Output of datacenter.client_code is empty but no error reported
So it is not a clean destroy.

This is happening with all the modules where I exported a variable and used it in subsequent modules. How do I get around this? Any help is appreciated.
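For reference, the pattern described above looks roughly like this (the directory names, the "app" module, and the "acme" value are made up to make the example self-contained):

# modules/data-center/variables.tf
variable "client_code" {}

# modules/data-center/outputs.tf
output "client_code" {
  value = "${var.client_code}"
}

# root main.tf
module "data-center" {
  source      = "./modules/data-center"
  client_code = "acme"
}

module "app" {
  source      = "./modules/app"   # another module that takes client_code as a variable
  client_code = "${module.data-center.client_code}"
}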

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

the robot seems to be on crack
