Terraform: Support SSH bastion host for MySQL and PostgreSQL providers

Created on 21 Jan 2016 · 7 comments · Source: hashicorp/terraform

I'd like to use Terraform's PostgreSQL provider to provision some databases on an AWS RDS instance in a private subnet (with Terraform running on a host outside of my VPC). It doesn't seem like this is possible now, but I'd love to see support for something along the lines of:

provider "postgresql" {
  host = "postgres_server_ip"
  port = 5432
  username = "postgres_user"
  password = "postgres_password"

  connection {
    bastion_host = "${aws_db_instance.admin.public_ip}"
    bastion_user = "ec2-user"
    bastion_private_key = "${file("aws-ssh-key")}"
  }
}
Labels: core, enhancement, thinking

Most helpful comment

I really like the idea of a generalized and reusable ssh_tunnel resource or "temporary helper thing".

All 7 comments

Interesting idea! I can definitely see how this would be useful.

Because the Postgres / MySQL providers expect to be able to talk directly via TCP to a database host, this means that instead of the straight "ssh hop" that the bastion_* fields in provisioners use today, we'd need something more along the lines of:

ssh -L 5432:postgres_server_ip:5432 bastion_host

And have the provider point its host attribute at the local end of the tunnel rather than at the database address directly.

I wonder if this could be modeled as a resource...

resource "ssh_tunnel" "postgres" {
  host        = "${var.bastion_host_ip}"
  port        = 15432
  remote_host = "${var.postgres_server_ip}"
  remote_port = 5432
}

provider "postgresql" {
  host = "${ssh_tunnel.postgres.host}"
  port = "${ssh_tunnel.postgres.port}"
  # ...
}

Though that would be tricky since the "resource" is ephemeral if we do it in-process, or we'd have to figure out some way of managing separate port-forwarding processes.
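For the separate-process variant, the tunnel could be an ordinary backgrounded OpenSSH process. A rough sketch that only builds the command line (`-f` background, `-N` no remote command, and `-L` local forward are standard OpenSSH flags; the hostnames, ports, and the buildTunnelCmd helper here are invented placeholders, not anything Terraform actually ships):

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildTunnelCmd constructs (but does not run) an OpenSSH command that
// backgrounds itself (-f), runs no remote command (-N), and forwards a
// local port to the database through the bastion (-L).
func buildTunnelCmd(localPort int, remoteHost string, remotePort int, bastion string) *exec.Cmd {
	forward := fmt.Sprintf("%d:%s:%d", localPort, remoteHost, remotePort)
	return exec.Command("ssh", "-f", "-N", "-L", forward, bastion)
}

func main() {
	cmd := buildTunnelCmd(15432, "postgres.internal", 5432, "ec2-user@bastion.example.com")
	fmt.Println(cmd.Args)
	// Starting it would be cmd.Run(); Terraform would then also need to
	// track and kill the backgrounded ssh process when the run finishes,
	// which is exactly the lifecycle problem described above.
}
```

The awkward part this makes visible: once `-f` detaches the process, there is no handle left on it, so cleanup would require PID files or similar bookkeeping.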

Anyways, lots of ways to slice this one - tagging with "thinking" - thanks again for the feature request!

Yeah, running Terraform outside of the VPC with SSH as the only entry point is tricky for a number of reasons, and this is one of them.

In my world we eventually decided to work around this a different way: rather than having Terraform reach into the VPC from outside over SSH, we run an EC2 instance inside the VPC whose entire job is to run Terraform, and our surrounding orchestration logs into that machine (proxying through the bastion) to run Terraform there. It's a really bare-bones machine that just has sshd running and the Terraform binaries on the PATH. It's a little weird, but it completely and generally solves this whole class of problem of reaching things that live on the private network.
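That orchestration step boils down to an ssh invocation that hops through the bastion to the in-VPC runner. A hypothetical sketch of building it (`-J`/ProxyJump is standard OpenSSH; the hostnames and the runnerCmd helper are made up for illustration):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runnerCmd builds an ssh invocation that hops through the bastion (-J)
// to the in-VPC runner instance and executes terraform there.
func runnerCmd(bastion, runner string, terraformArgs ...string) *exec.Cmd {
	args := append([]string{"-J", bastion, runner, "terraform"}, terraformArgs...)
	return exec.Command("ssh", args...)
}

func main() {
	cmd := runnerCmd("ec2-user@bastion.example.com", "ec2-user@tf-runner.internal", "plan")
	fmt.Println(cmd.Args)
}
```

Because Terraform itself runs on the private network in this setup, every provider (not just Postgres/MySQL) gets direct TCP reachability for free.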


This problem applies to essentially everything except the top-level IaaS providers in Terraform, so if we could find a way to generalize the SSH tunnel solution like @phinze described then that would be awesome. The idea of a kind of thing that only lives for a single Terraform run, like the ssh_tunnel example, reminds me of the prototyping I did in #2789... though after the thinking I did that led to #4169 I can't help but wonder if this "temporary helper thing" ought to be something distinct from the resource idea so that its lifecycle can differ in a way that is easier to explain to users.

I really like the idea of a generalized and reusable ssh_tunnel resource or "temporary helper thing".

I've bumped into this a number of times (usually with Consul) and have always thought maybe this would be nested inside of a provider's config. In my limited context here I think that our current bastion connect function could be re-used and just wrapped in another layer that creates a new local net.Listener for the local connection. The bigger parts (at least in my mind) would be making the interpolations possible in the provider config, and handling the creation/tear-down of the tunnel. I could see the setup/tear-down being expensive too, so it would be nice if we could re-use that connection across calls into the provider.

Just spitballin' here:

provider "consul" {
  # Create the tunnel so we can talk to Consul in our VPC
  tunnel {
    bastion_host = "tunnel.hashicorp.com"
    bastion_port = "22"
    bastion_user = "foo"
    bastion_password = "bar"
    destination_host = "consul.service.consul"
    destination_port = "80"
  }

  # Set up the Consul params using interpolation to ref the tunnel
  address = "${tunnel.host}:${tunnel.port}"
  datacenter = "dc1"
}

Any progress on this? I've run into an issue where I want to bootstrap an entire stack from outside the VPC and I can't initialize basic DB resources on an RDS instance because there is no way to connect to it from outside. The bastion connection stuff does not seem to work.

Sorry for the silence here, everyone. I'm going to close this one to consolidate into #8367, as part of our effort to get the issue backlog back in a manageable state.

I would still love to solve this problem, and I think the proposal in #8367 is a good idea of a general path to take.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
