terraform-provider-aws: RDS cluster instances created in default VPC

Created on 19 Jan 2018 · 6 comments · Source: hashicorp/terraform-provider-aws

Is it possible to create RDS cluster instances in a VPC other than the default VPC?

Terraform Version

Terraform v0.11.0
+ provider.aws v1.7.0
+ provider.null v1.0.0
+ provider.template v1.0.0

Terraform Configuration Files

resource "aws_rds_cluster" "default" {
  cluster_identifier = "default-db-cluster"
  availability_zones = ["eu-central-1a", "eu-central-1b", "eu-central-1c"]
  database_name = "mydb"
  master_username = "mydbuser"
  master_password = "mydbpassword"
  port = 5432
  engine = "aurora-postgresql"
  backup_retention_period = 7
  storage_encrypted = true
  vpc_security_group_ids = ["${var.security_group_id}"]

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_rds_cluster_instance" "default-instance" {
  count = 2
  cluster_identifier = "${aws_rds_cluster.default.cluster_identifier}"
  instance_class = "db.r4.large"
  db_subnet_group_name = "${aws_rds_cluster.default.db_subnet_group_name}"
  publicly_accessible = false
  engine = "aurora-postgresql"

  lifecycle {
    create_before_destroy = true
  }
}

Debug Output

https://gist.github.com/mmjmanders/52fa679ab31780f8e8559ab4c72a0844

Expected Behavior

An RDS cluster (with the instances) should be created in the same VPC as the ${var.security_group_id} security group is in.

Actual Behavior

Nothing is created due to an error.

Steps to Reproduce

  1. terraform apply
Labels: enhancement, service/rds

All 6 comments

For future travelers, the relevant error from the gist is:

* aws_rds_cluster.experiments: InvalidParameterCombination: The DB instance and EC2 security group are in different VPCs. The DB instance is in vpc-XXXXXXXX and the EC2 security group is in vpc-YYYYYYYY
    status code: 400, request id: 6d163ece-1ae5-426c-bfa3-1eea3b20e31c

@mmjmanders I presume you're expecting the RDS instances to be created in a specific VPC's default subnets? Ideally in that case, we _might_ be able to determine the VPC from the given vpc_security_group_ids as an enhancement when db_subnet_group_name is omitted. For now, you should be able to manually specify which VPC the RDS instances are in via the db_subnet_group_name attribute on the aws_rds_cluster resource as subnets are bound to a specific VPC.
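
For illustration, a minimal sketch of that workaround in the 0.11 syntax used above (the subnet-group name and the subnet variables are assumptions; the subnets must live in the same VPC as ${var.security_group_id} and span at least two availability zones):

resource "aws_db_subnet_group" "default" {
  # Hypothetical private subnets belonging to the security group's VPC.
  name       = "default-db-subnet-group"
  subnet_ids = ["${var.private_subnet_a_id}", "${var.private_subnet_b_id}"]
}

resource "aws_rds_cluster" "default" {
  # ... arguments from the configuration above ...
  db_subnet_group_name   = "${aws_db_subnet_group.default.name}"
  vpc_security_group_ids = ["${var.security_group_id}"]
}

With that in place, the aws_rds_cluster_instance resources pick up the same subnet group through ${aws_rds_cluster.default.db_subnet_group_name}.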

@bflad thanks for the tip! This works. However, when I now run my configuration the cluster and its instances _are_ created, but terraform apply exits with the following error:

Error: Error applying plan:

1 error(s) occurred:

* module.project.aws_ecs_service.main: 1 error(s) occurred:

* aws_ecs_service.main: ClientException: TaskDefinition is inactive
    status code: 400, request id: 6c26f0f6-ff4f-11e7-b3d1-b926730e69f9

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

The error log is here https://gist.github.com/mmjmanders/d79d3f68fe7595cfb158c6b825e57321

Could this have something to do with the fact that creating these instances takes about 10 minutes each?

I'm doubtful it has anything to do with that at first glance. ECS marks a task definition as INACTIVE when it's deleted/deregistered: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deregister-task-definition.html

So it seems like something deleted the ECS task definition that Terraform was pointing at (important note: the aws_ecs_task_definition resource will always statically point at the same revision of the definition that Terraform created). If you're doing task definition deployments outside of Terraform, you'll likely need to get creative, for example by using the aws_ecs_task_definition data source instead. If you have further questions about ECS management best practices around these things, I would suggest checking the archives (I think I've seen this discussed there) or dropping a note in the terraform-tool Google group.
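
As a rough sketch of that data-source approach (the service name, task family, and cluster variable below are hypothetical):

data "aws_ecs_task_definition" "main" {
  # Resolves to the latest ACTIVE revision of the family, even when
  # new revisions are registered outside of Terraform.
  task_definition = "my-app"
}

resource "aws_ecs_service" "main" {
  name          = "my-app"
  cluster       = "${var.ecs_cluster_id}"
  desired_count = 1

  # Point the service at that latest revision instead of the revision
  # Terraform originally created.
  task_definition = "my-app:${data.aws_ecs_task_definition.main.revision}"
}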

I have the same problem, but with aws_db_instance instead of aws_rds_cluster.
I can't use the db_subnet_group_name workaround because I only have one availability zone and receive the error:
Error creating DB Subnet Group: DBSubnetGroupDoesNotCoverEnoughAZs: DB Subnet Group doesn't meet availability zone coverage requirement. Please add subnets to cover at least 2 availability zones. Current coverage: 1

Any suggestion on how to solve this?


EDIT:
Forget about that; Amazon does not allow creating an RDS DB subnet group that covers only one availability zone.

@bflad's description above is correct and it worked for me. However, being fairly new to Terraform on AWS, it wasn't immediately clear to me, so here's an illustration of the relations:

vpc <-- subnet <-- subnet_group <-- rds --> security_group --> vpc

By default, RDS falls back to the default subnet group, whose subnets live in the default VPC. So if your security_group is attached to anything other than the default VPC, the RDS subnets end up in a different VPC than the security group.

I was able to overcome the issue by creating three subnets, each in a different availability zone. All three subnets belong to the same subnet_group, and that subnet_group therefore sits in the same VPC (vpc_id) that the security_group uses, as sketched below.
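
For illustration, a minimal sketch of that layout (the CIDR blocks, availability-zone list, and variable names are assumptions):

resource "aws_subnet" "db" {
  count             = 3
  vpc_id            = "${var.vpc_id}"
  cidr_block        = "${cidrsubnet(var.vpc_cidr, 8, count.index)}"
  availability_zone = "${element(var.availability_zones, count.index)}"
}

resource "aws_db_subnet_group" "db" {
  name       = "db-subnet-group"
  subnet_ids = ["${aws_subnet.db.*.id}"]
}

resource "aws_security_group" "db" {
  name   = "db-sg"
  vpc_id = "${var.vpc_id}" # same VPC as the subnets above
}

The cluster then references aws_db_subnet_group.db.name and aws_security_group.db.id, so everything resolves to the same VPC.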

The issue is with the DB security group you created. The steps below help resolve it.

Make sure the following arguments are set:

1) In the aws_db_instance resource, reference the parameter group, subnet group, and security group:

resource "aws_db_instance" "mysql" {
  # ... other arguments ...
  parameter_group_name   = "${aws_db_parameter_group.mysql_db_pg.name}"
  db_subnet_group_name   = "${aws_db_subnet_group.db_subnet_group.name}"
  vpc_security_group_ids = ["${aws_security_group.db_security_group.id}"]
}

2) Important: create your DB security group and attach it to the required VPC (otherwise it will use the default VPC).

resource "aws_security_group" "db_security_group" {
name = "${var.db_name}-${var.environment}-db-sg"
description = "Allows access to db"
vpc_id = "${var.vpc_id}"

tags = {
Name = "db_security_group_${var.project}"
}
}

3) Parameter group:

resource "aws_db_parameter_group" "mysql_db_pg" {
  name   = "db-pgroup-${var.environment}-${var.project}"
  family = "mysql5.7"

  tags = {
    Name = "db_para_group_${var.project}"
  }
}

4) Subnet group (its subnets must belong to the same VPC as the security group):

resource "aws_db_subnet_group" "db_subnet_group" {
  name       = "db_sg_${var.environment}_${var.project}"
  subnet_ids = "${var.db_subnet_ids}"

  tags = {
    Name = "db_sub_data_group_${var.project}"
  }
}
Thanks,
SP
