Terraform-provider-aws: S3 settings on aws_dms_endpoint conflict with "extra_connection_attributes"

Created on 19 Mar 2019 · 19 comments · Source: hashicorp/terraform-provider-aws

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version


v0.11.13

Affected Resource(s)

  • aws_dms_endpoint.s3_settings

Terraform Configuration Files

resource "aws_dms_endpoint" "s3_raw" {
  endpoint_id = "s3-raw"
  engine_name = "s3"
  endpoint_type = "target"
  extra_connection_attributes = "dataFormat=parquet;"
  s3_settings {
    service_access_role_arn = "${aws_iam_role.role.arn}"
    bucket_name = "${var.s3_bucket}"
    bucket_folder = "${var.raw_data_path}/dms"
    compression_type = "GZIP"
  }
}

Expected Behavior

The "extra connection status" in the DMS endpoint should be:

bucketFolder=data/raw/dms;bucketName=MY_BUCKET_NAME;compressionType=GZIP;csvDelimiter=,;csvRowDelimiter=\n;dataFormat=parquet;

Actual Behavior

The "extra connection status" in the DMS endpoint is:

bucketFolder=data/raw/dms;bucketName=MY_BUCKET_NAME;compressionType=GZIP;csvDelimiter=,;csvRowDelimiter=\n;

Notice the missing dataFormat=parquet.

Steps to Reproduce

  1. terraform apply
bug service/databasemigrationservice

Most helpful comment

@aeschright is this being actively worked on?

All 19 comments

Maybe something to do with AWS SDK v1.18.4 released yesterday?

S3 Endpoint Settings added support for 1) Migrating to Amazon S3 as a target in Parquet format 2) Encrypting S3 objects after migration with custom KMS Server-Side encryption. Redshift Endpoint Settings added support for encrypting intermediate S3 objects during migration with custom KMS Server-Side encryption.

Not sure if it makes any difference, but I haven't upgraded my AWS SDK, and I installed Terraform about a week ago, so I'm not running this new code.

I'm seeing this issue as well.
I'm using the below version of terraform:
Terraform v0.11.13

  • provider.aws v2.6.0

Updated to v2.8.0 and it's still an issue.

The extra connection attributes are not written (https://github.com/terraform-providers/terraform-provider-aws/blob/147db051cf2f79503a2a47ea3ee860a3520a6a84/aws/resource_aws_dms_endpoint.go#L283-L285) to the DMS API or read (https://github.com/terraform-providers/terraform-provider-aws/blob/147db051cf2f79503a2a47ea3ee860a3520a6a84/aws/resource_aws_dms_endpoint.go#L566) from it when the engine type is s3, although, confusingly, they do get updated (https://github.com/terraform-providers/terraform-provider-aws/blob/147db051cf2f79503a2a47ea3ee860a3520a6a84/aws/resource_aws_dms_endpoint.go#L391-L393) for all engine types.
If we handle extra_connection_attributes consistently for all engine types, the diff handling will have to take into account that the value returned by the API - bucketFolder=data/dms;bucketName=bucket_name;compressionType=GZIP;csvDelimiter=,;csvRowDelimiter=\\n;dataFormat=parquet; - includes not just the value(s) specified in the Terraform code - dataFormat=parquet; - but also the attributes set in the s3_settings block.

Based on what @ewbankkit mentions, an intermediate workaround is to run an initial terraform apply and then change one of your connection attributes (use some irrelevant parameter such as maxFileSize). A second terraform apply will then update the missing connection attributes.
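
For illustration, a minimal sketch of that workaround applied to the original configuration. The maxFileSize value of 32000 is arbitrary; any otherwise irrelevant attribute that changes the string should do, since the point is only to trigger the update code path, which does send the attributes:

resource "aws_dms_endpoint" "s3_raw" {
  endpoint_id   = "s3-raw"
  engine_name   = "s3"
  endpoint_type = "target"
  # On the second apply only: append an otherwise irrelevant attribute such
  # as maxFileSize so the value differs from state; the update code path
  # does pass extra_connection_attributes to the DMS API.
  extra_connection_attributes = "dataFormat=parquet;maxFileSize=32000;"
  s3_settings {
    service_access_role_arn = "${aws_iam_role.role.arn}"
    bucket_name             = "${var.s3_bucket}"
    bucket_folder           = "${var.raw_data_path}/dms"
    compression_type        = "GZIP"
  }
}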

@aeschright is this being actively worked on?

Are there any new updates on this issue? I am seeing this in Terraform 12 as well.

Terraform: v0.12.9
provider.aws: v2.42

@rory-lamendola @mchudoba Please upvote the pull request if you want it prioritised. It is not a full solution, but it will serve most of the use-cases out there by simply extending the supported s3_settings attributes to cover attributes that would otherwise need to be defined in extra_connection_attributes.
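
As a rough sketch of what that would look like once such attributes are exposed natively (the data_format argument name is an assumption here and is not available in the provider versions discussed above), the original example would no longer need extra_connection_attributes at all:

resource "aws_dms_endpoint" "s3_raw" {
  endpoint_id   = "s3-raw"
  engine_name   = "s3"
  endpoint_type = "target"
  s3_settings {
    service_access_role_arn = "${aws_iam_role.role.arn}"
    bucket_name             = "${var.s3_bucket}"
    bucket_folder           = "${var.raw_data_path}/dms"
    compression_type        = "GZIP"
    # Assumed argument name; would replace
    # extra_connection_attributes = "dataFormat=parquet;".
    data_format             = "parquet"
  }
}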

Anyone working on this? It seems the fix has not passed QA for a while.

Folks, is anyone working on it?

Folks, is anyone working on it?

Right now I am allocated to a different project at my company. So I am not working on this currently.

Decided to use the CloudFormation plugin.

Anyone working on it? We plan to work around it by updating just extra_connection_attributes with an AWS CLI bash script.

Any update on this issue? Please fix it.

Hi everybody, in one of my data pipelines I am trying to implement time-based partitioning for the S3 target endpoint, similar to the TimeBasedPartitioner in io.confluent.connect.s3.S3SinkConnector.

Is there any related issue tracking this?

Thanks
Raghu

Yet another user wondering why this issue is being ignored?
It's been 18 months since it was first raised, without a single indication of it even being recognised as a bug!?

Also hitting this issue in production, it's resulting in files not being encrypted when they land in the S3 bucket, due to the encryption properties being dropped.

Please fix this!!!

Hi all! :wave: Just wanted to direct you to our public roadmap for this quarter (Nov-Jan) in which this item has been mentioned.

Due to the significant community interest in resolving this issue, we will be looking at merging existing contributions soon.

We appreciate all the contributions and feedback thus far.
