Terraform-provider-aws: aws_dms_replication_task replication_task_settings always reports as needing modification

Created on 27 Aug 2017 · 26 Comments · Source: hashicorp/terraform-provider-aws

Terraform Version

Terraform v0.10.2

Affected Resource(s)

  • aws_dms_replication_task

Terraform Configuration Files

resource "aws_dms_replication_task" "main" {
  migration_type            = "${var.migration_type}"
  replication_instance_arn  = "${var.replication_instance_arn}"
  replication_task_id       = "${var.database_name}"
  replication_task_settings = "${trimspace(file("settings/settings.json"))}"
  source_endpoint_arn       = "${aws_dms_endpoint.source.endpoint_arn}"
  table_mappings            = "${var.table_mappings}"
  target_endpoint_arn       = "${var.endpoint_arn}"
}

Expected Behavior

After running terraform apply, a subsequent run of terraform plan should indicate no changes needed.

Actual Behavior

The plan continually requires modifications, as seen below. It appears that the settings from my Terraform project are compared against the full set of properties, with default values, pulled from AWS. Ideally, you should be able to set only the properties that differ from the defaults.

~ aws_dms_replication_task.main
      replication_task_settings: "{\"TargetMetadata\":{\"TargetSchema\":\"\",\"SupportLobs\":true,\"FullLobMode\":false,\"LobChunkSize\":64,\"LimitedSizeLobMode\":true,\"LobMaxSize\":32,\"LoadMaxFileSize\":0,\"ParallelLoadThreads\":0,\"ParallelLoadBufferSize\":0,\"BatchApplyEnabled\":true},\"FullLoadSettings\":{\"TargetTablePrepMode\":\"DROP_AND_CREATE\",\"CreatePkAfterFullLoad\":false,\"StopTaskCachedChangesApplied\":false,\"StopTaskCachedChangesNotApplied\":false,\"MaxFullLoadSubTasks\":8,\"TransactionConsistencyTimeout\":600,\"CommitRate\":10000},\"Logging\":{\"EnableLogging\":true,\"LogComponents\":[{\"Id\":\"SOURCE_UNLOAD\",\"Severity\":\"LOGGER_SEVERITY_DEFAULT\"},{\"Id\":\"TARGET_LOAD\",\"Severity\":\"LOGGER_SEVERITY_DEFAULT\"},{\"Id\":\"SOURCE_CAPTURE\",\"Severity\":\"LOGGER_SEVERITY_DEFAULT\"},{\"Id\":\"TARGET_APPLY\",\"Severity\":\"LOGGER_SEVERITY_INFO\"},{\"Id\":\"TASK_MANAGER\",\"Severity\":\"LOGGER_SEVERITY_DEBUG\"}],\"CloudWatchLogGroup\":\"dms-tasks-bi-replication\",\"CloudWatchLogStream\":\"dms-task-5G74HSCGSNESXL47RZBDASB7Y4\"},\"ControlTablesSettings\":{\"historyTimeslotInMinutes\":5,\"ControlSchema\":\"\",\"HistoryTimeslotInMinutes\":5,\"HistoryTableEnabled\":false,\"SuspendedTablesTableEnabled\":false,\"StatusTableEnabled\":false},\"StreamBufferSettings\":{\"StreamBufferCount\":3,\"StreamBufferSizeInMB\":8,\"CtrlStreamBufferSizeInMB\":5},\"ChangeProcessingDdlHandlingPolicy\":{\"HandleSourceTableDropped\":false,\"HandleSourceTableTruncated\":false,\"HandleSourceTableAltered\":true},\"ErrorBehavior\":{\"DataErrorPolicy\":\"LOG_ERROR\",\"DataTruncationErrorPolicy\":\"LOG_ERROR\",\"DataErrorEscalationPolicy\":\"SUSPEND_TABLE\",\"DataErrorEscalationCount\":50,\"TableErrorPolicy\":\"SUSPEND_TABLE\",\"TableErrorEscalationPolicy\":\"STOP_TASK\",\"TableErrorEscalationCount\":50,\"RecoverableErrorCount\":0,\"RecoverableErrorInterval\":5,\"RecoverableErrorThrottling\":true,\"RecoverableErrorThrottlingMax\":1800,\"ApplyErrorDeletePolicy\":\"IGNORE_RECORD\",\
"ApplyErrorInsertPolicy\":\"LOG_ERROR\",\"ApplyErrorUpdatePolicy\":\"LOG_ERROR\",\"ApplyErrorEscalationPolicy\":\"LOG_ERROR\",\"ApplyErrorEscalationCount\":0,\"ApplyErrorFailOnTruncationDdl\":false,\"FullLoadIgnoreConflicts\":true,\"FailOnTransactionConsistencyBreached\":false},\"ChangeProcessingTuning\":{\"BatchApplyPreserveTransaction\":true,\"BatchApplyTimeoutMin\":1,\"BatchApplyTimeoutMax\":30,\"BatchApplyMemoryLimit\":500,\"BatchSplitSize\":0,\"MinTransactionSize\":1000,\"CommitTimeout\":1,\"MemoryLimitTotal\":1024,\"MemoryKeepTime\":60,\"StatementCacheSize\":50}}" => "{\n  \"TargetMetadata\": {\n    \"SupportLobs\": true,\n    \"FullLobMode\": false,\n    \"LobChunkSize\": 64,\n    \"LimitedSizeLobMode\": true,\n    \"LobMaxSize\": 32,\n    \"BatchApplyEnabled\": true\n  },\n  \"FullLoadSettings\": {\n    \"TargetTablePrepMode\": \"DROP_AND_CREATE\",\n    \"MaxFullLoadSubTasks\": 8,\n    \"CommitRate\": 10000\n  },\n  \"ChangeProcessingTuning\": {\n    \"BatchApplyPreserveTransaction\": true,\n    \"BatchSplitSize\": 0\n  },\n  \"Logging\": {\n    \"EnableLogging\": true,\n    \"LogComponents\": [{\n        \"Id\": \"SOURCE_UNLOAD\",\n        \"Severity\": \"LOGGER_SEVERITY_DEFAULT\"\n      },\n      {\n        \"Id\": \"SOURCE_CAPTURE\",\n        \"Severity\": \"LOGGER_SEVERITY_DEFAULT\"\n      },\n      {\n        \"Id\": \"TARGET_LOAD\",\n        \"Severity\": \"LOGGER_SEVERITY_DEFAULT\"\n      },\n      {\n        \"Id\": \"TARGET_APPLY\",\n        \"Severity\": \"LOGGER_SEVERITY_INFO\"\n      },\n      {\n        \"Id\": \"TASK_MANAGER\",\n        \"Severity\": \"LOGGER_SEVERITY_DEBUG\"\n      }\n    ]\n  },\n  \"ChangeProcessingDdlHandlingPolicy\": {\n    \"HandleSourceTableDropped\": false,\n    \"HandleSourceTableTruncated\": false,\n    \"HandleSourceTableAltered\": true\n  },\n  \"ErrorBehavior\": {\n    \"DataErrorPolicy\": \"LOG_ERROR\",\n    \"DataTruncationErrorPolicy\": \"LOG_ERROR\",\n    \"DataErrorEscalationPolicy\": \"SUSPEND_TABLE\",\n    
\"DataErrorEscalationCount\": 50,\n    \"TableErrorPolicy\": \"SUSPEND_TABLE\",\n    \"TableErrorEscalationPolicy\": \"STOP_TASK\",\n    \"TableErrorEscalationCount\": 50,\n    \"RecoverableErrorCount\": 0,\n    \"RecoverableErrorInterval\": 5,\n    \"RecoverableErrorThrottling\": true,\n    \"RecoverableErrorThrottlingMax\": 1800,\n    \"ApplyErrorDeletePolicy\": \"IGNORE_RECORD\",\n    \"ApplyErrorInsertPolicy\": \"LOG_ERROR\",\n    \"ApplyErrorUpdatePolicy\": \"LOG_ERROR\",\n    \"ApplyErrorEscalationPolicy\": \"LOG_ERROR\",\n    \"ApplyErrorEscalationCount\": 0,\n    \"FullLoadIgnoreConflicts\": true\n  }\n}"

Steps to Reproduce

  1. terraform apply
  2. terraform plan

Important Factoids

N/A

References

N/A

bug service/databasemigrationservice

Most helpful comment

Hi folks, I've opened up a PR with a fix for this issue here: https://github.com/terraform-providers/terraform-provider-aws/pull/13476

Thanks to @nijave and @dariusjs for the idea, though I ultimately ended up going in a slightly different direction.

Please add a reaction so we can get this merged in.

All 26 comments

Similar (likely the same) issue in 0.10.7

This causes an issue even if you omit replication_task_settings. If you omit the replication_task_settings and run apply multiple times you'll get an error on subsequent applications:

'InvalidParameterCombinationException: No modifications were requested on the task'

It seems that the tfstate file contains all the settings' values and sees that as differing from "" (nothing), but when it attempts to modify the task it (rightly) notices there's no real modification.
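To illustrate the last point: the provider compares replication_task_settings as a raw string, so any difference in formatting or key order between what AWS returns and what you wrote registers as a change, even when the data is identical. A minimal sketch of the distinction (jsonSemanticallyEqual is a hypothetical helper for illustration, not provider code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// jsonSemanticallyEqual reports whether two JSON documents encode the same
// data, ignoring whitespace and key order. The provider instead compares
// the raw strings, which is why a formatting-only difference shows up as a
// change in the plan.
func jsonSemanticallyEqual(a, b string) bool {
	var av, bv interface{}
	if err := json.Unmarshal([]byte(a), &av); err != nil {
		return false
	}
	if err := json.Unmarshal([]byte(b), &bv); err != nil {
		return false
	}
	return reflect.DeepEqual(av, bv)
}

func main() {
	stored := `{"Logging":{"EnableLogging":true}}`
	written := "{\n  \"Logging\": {\n    \"EnableLogging\": true\n  }\n}"
	fmt.Println(stored == written)                      // false: string comparison flags a diff
	fmt.Println(jsonSemanticallyEqual(stored, written)) // true: the settings are identical
}
```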

The real problem is that Terraform is missing CloudWatchLogGroup and CloudWatchLogStream in the task settings.

If you disable EnableLogging, the problem disappears.

If you enable EnableLogging, the problem is that Terraform can't control the CloudWatch log settings; I'm not sure whether that's a limit of the AWS API. After the first task creation, you can get the CloudWatch log settings from the AWS console and fill them into your Terraform task settings; then the state is synced.

If it's a restriction of the AWS API, I think Terraform should ignore CloudWatchLogGroup and CloudWatchLogStream in task_settings.

@monsterxx03 That's what I did until I realized I could simply use "ignore_changes", which acts as an adequate workaround for the time being.

@sforcier I'm trying to do the same now, but I was wondering about the correct syntax

I have this within my aws_dms_replication_task

lifecycle { ignore_changes = ["CloudWatchLogGroup", "CloudWatchLogStream"] }

Did it resolve your problem? I'm still getting the same error...

@donalddewulf You have to ignore entire arguments, e.g.

  lifecycle {
      ignore_changes = ["replication_task_settings"]
  }

I'm not sure how to ignore specific changes within the text value of replication_task_settings; it may not be possible.

@sforcier Yes that makes sense, thanks!

Has anyone found a workaround for this?

I ran into the same problem.

resource "aws_dms_replication_task" "campaigndb" {
    count          = "${terraform.workspace == "production" ? "1" : "0"}"
    migration_type = "full-load-and-cdc"

    replication_instance_arn  = "${aws_dms_replication_instance.campaigndb.replication_instance_arn}"
    replication_task_id       = "campaigndb-replication-task"
    replication_task_settings = "${data.template_file.campaigndb_settings.rendered}"
    source_endpoint_arn       = "${aws_dms_endpoint.source.endpoint_arn}"
    table_mappings            = "${data.template_file.campaigndb_mappings.rendered}"
    target_endpoint_arn       = "${aws_dms_endpoint.target.endpoint_arn}"

    tags {
        Name = "campaigndb replication"
    }
}

Every time I run terraform plan/apply the settings are reapplied:

Terraform will perform the following actions:

  ~ aws_dms_replication_task.campaigndb
      replication_task_settings: <removed>

Although there was no change at all. And yes, I have configured "EnableLogging": true.
Not using logging is not an option.

Terraform v0.11.3
+ provider.aws v1.14.1

Although ignore_changes is a (dirty) workaround, I'd love to see some internal filtering of the settings dict before Terraform tries to update it.

Therefore +1

Still present in
Terraform v0.11.7

  • provider.aws v1.15.0

Facing the same issue with aws_dms_replication_task

Still present in
Terraform 0.11.7
used with provider.aws 1.27.0 and provider.template 1.0.0 on macOS.

The 'workaround' works fine for me (until I need to change the task settings ^^):
lifecycle { ignore_changes = ["replication_task_settings"] }

Hi, is there any update on this? I still seem to get the destroy issue as well, even with:

lifecycle {
  ignore_changes = ["replication_task_settings"]
}

The error actually says the 'id' (forces new resource).

The id is made up of a local var and a suffix, which don't change:

replication_task_id       = "${local.dms_task_name}-task"

I'm on v0.11.11 as well, with v1.59 of the AWS provider. Also, both the table mappings and settings come from template files. These don't change when Terraform complains that the 'id' has changed, forcing the resource to be recreated.

  dms_task_table_mappings            = "${data.template_file.dms_task_table_mappings.rendered}"
  dms_task_replication_task_settings = "${data.template_file.dms_task_table_task_settings.rendered}"

Agreed, logging must be enabled; disabling it is not an option.

Anyone? Sort of urgent; my resource always wants to recreate itself :(

I just ran into this too, so let me recap concisely. Today, Terraform doesn't support JSON key comparison, just string comparison. (Please correct me if I'm wrong and I'll put a PR up to fix this issue.)

CloudWatchLogGroup and CloudWatchLogStream are two values that AWS returns inside the JSON blob. These two settings are not user-configurable (not allowed on creation, and no modification allowed).

This "setting" is not a Terraform resource argument, just a key value in the JSON blob that Terraform passes to the API.

So knowing that in the middle of your JSON blob are two strings that can't be there on creation, and after creation (when logging is enabled) have to be there, you could get fancy with conditional interpolation (null is an accepted value on create), but the names are also dynamic, so you'd need to interpolate from the local AWS command parsing the JSON.

My workaround: on creation, use null for both their values; the resources are successfully created. Next, use the AWS CLI to pull down the live JSON and replace your JSON in code. These values don't change over time, so this should be stable until you have to create the resource again. You should now be able to plan without changes shown.

We use data.local_file for our JSON blobs, so I pulled the current JSON down like this:

aws dms describe-replication-tasks --output json > /tmp/dms.json
for i in $(cat /tmp/dms.json | jq -r .ReplicationTasks[].ReplicationTaskIdentifier); do cat /tmp/dms.json | jq -r ".ReplicationTasks[] | select(.ReplicationTaskIdentifier==\"$i\") | .ReplicationTaskSettings" > $i.json; done

@apparentlymart Any chance the next minor release (semantic version, major for TF) will support JSON key/value comparison and formatted diffs, so it's legible and we don't have to worry about keys moving place in the blob?

FYI - I've noticed not to use any capitals in DMS-related resource names. They all get made lowercase, and then TF thinks they've always changed. Annoying - but it's a DMS thing, so make sure resources are lowercase!

Terraform v. 0.11.11

data "template_file" "static_test_mapping" {
  template = "${file("${path.module}/static_task.tpl")}"
}

data "template_file" "static_test_settings" {
  template = "${file("${path.module}/task_settings.tpl")}"
}

resource "aws_dms_replication_task" "static_test" {
  replication_task_id       = "static-test"
  migration_type            = "full-load"

  replication_instance_arn  = "<>"
  source_endpoint_arn       = "<>"
  target_endpoint_arn       = "<>"

  table_mappings            = "${replace(data.template_file.static_test_mapping.rendered,"\\s","")}"
  replication_task_settings = "${replace(data.template_file.static_test_settings.rendered,"\\s","")}"

  tags = {
    Name = "static-test"
  }
}

Successfully applied, subsequent plans/applies do not detect any changes.

> ,"\s","")

This is not a solution for me. I still have the same problem.

Has anyone found a workaround for this with TF v0.12.6? Even lifecycle.ignore_changes settings are not working.

  replication_task_settings = "${data.template_file.task-settings.rendered}"

  lifecycle {
      ignore_changes = [
      replication_task_settings,
     ]
  }

Docs -
https://www.terraform.io/docs/configuration/resources.html#ignore_changes

@vivekyad4v For us, the ignore_changes you proposed are working (since TF 0.11.x actually). Maybe your problem lies elsewhere?

> I just ran into this too, so let me recap concisely. Today, Terraform doesn't support JSON key comparison, just string. […]

I think you can override DiffSuppressFunc (https://www.terraform.io/docs/extend/schemas/schema-behaviors.html#diffsuppressfunc) here https://github.com/terraform-providers/terraform-provider-aws/blob/master/aws/resource_aws_dms_replication_task.go#L63 with code that parses the JSON and removes the logging keys if they exist
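As a rough, self-contained sketch of that suggestion (suppressDiffIgnoringKeys is an illustrative name and signature; the real DiffSuppressFunc also receives the schema key and a *schema.ResourceData):

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// suppressDiffIgnoringKeys parses both JSON documents, removes the
// AWS-injected keys from the "Logging" section of each, and reports
// whether the remainder is structurally equal. Returning true would tell
// Terraform to suppress the diff.
func suppressDiffIgnoringKeys(old, new string, ignored ...string) bool {
	parse := func(s string) map[string]interface{} {
		var m map[string]interface{}
		if err := json.Unmarshal([]byte(s), &m); err != nil {
			return nil
		}
		// Only the "Logging" section carries the server-injected keys.
		if logging, ok := m["Logging"].(map[string]interface{}); ok {
			for _, key := range ignored {
				delete(logging, key)
			}
		}
		return m
	}
	return reflect.DeepEqual(parse(old), parse(new))
}

func main() {
	remote := `{"Logging":{"EnableLogging":true,"CloudWatchLogGroup":"dms-tasks","CloudWatchLogStream":"dms-task-abc"}}`
	local := `{"Logging":{"EnableLogging":true}}`
	fmt.Println(suppressDiffIgnoringKeys(remote, local, "CloudWatchLogGroup", "CloudWatchLogStream")) // true
}
```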

I've got a patched version of the function that @nijave mentions, which I've tested locally against a DMS endpoint I was also having problems with. I'd like to have something like this upstream, but I'm still a Terraform v0.11 user, so I've built this off the AWS provider 2.25.x.

Ignoring changes is not good enough; otherwise, how can you ensure Terraform keeps pushing new changes?

It's a bit of a reach calling this a Terraform bug as well; it's more like unexpected behaviour, as AWS is creating the CloudWatch stream and CloudWatch group under the hood and does not allow these to be set. In fact, when creating a lot of this through Terraform and not the UI, you also need to create some related IAM resources for all of this to work properly.

func suppressEquivalentJsonDiffsExcludeFields(k, old, new string, d *schema.ResourceData) bool {
    var dat map[string]interface{}
    if err := json.Unmarshal([]byte(old), &dat); err != nil {
        return false
    }

    // Remove the CloudWatch settings from the old value, since the AWS API
    // injects them into the "Logging" section and they cannot be set by the
    // user. The checked type assertion avoids a panic if "Logging" is absent.
    if loggingOptions, ok := dat["Logging"].(map[string]interface{}); ok {
        delete(loggingOptions, "CloudWatchLogGroup")
        delete(loggingOptions, "CloudWatchLogStream")
    }

    cleanedJson, err := json.Marshal(dat)
    if err != nil {
        return false
    }
    old = string(cleanedJson)

    // Compact both documents so whitespace differences don't matter.
    ob := bytes.NewBufferString("")
    if err := json.Compact(ob, []byte(old)); err != nil {
        return false
    }

    nb := bytes.NewBufferString("")
    if err := json.Compact(nb, []byte(new)); err != nil {
        return false
    }

    return jsonBytesEqual(ob.Bytes(), nb.Bytes())
}
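Note that the snippet calls jsonBytesEqual, a helper that lives inside the provider and isn't shown here. A minimal stand-in, assuming its intent is structural equality of two JSON byte slices:

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// jsonBytesEqual unmarshals both documents and compares them structurally,
// so key order and whitespace do not affect the result. This is a sketch of
// the provider-internal helper, not the exact upstream implementation.
func jsonBytesEqual(b1, b2 []byte) bool {
	var o1, o2 interface{}
	if err := json.Unmarshal(b1, &o1); err != nil {
		return false
	}
	if err := json.Unmarshal(b2, &o2); err != nil {
		return false
	}
	return reflect.DeepEqual(o1, o2)
}

func main() {
	a := []byte(`{"b":2,"a":1}`)
	b := []byte(`{ "a": 1, "b": 2 }`)
	fmt.Println(jsonBytesEqual(a, b)) // true: same data, different formatting
}
```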

> I've got a patched function that @nijave mentions that I've tested locally against a DMS endpoint I was also having problems with. […]

Was this working? I had something very similar, and Terraform 0.11.13 still wanted to update the field even though it showed no diff for it (just that the field needed to change). It may not be a bug in _Terraform_, but it's definitely a bug in the provider, since it's trying to perform bogus changes.

Yes, this one did work for me, as it's a matter of removing the two CloudWatch keys. I agree the provider itself has the issue. I wonder whether Terraform 11 can ignore maps as version 12 does, described here: https://github.com/hashicorp/terraform/issues/21857 - so I will give that a try a bit later.

We just faced this issue. What I think may be the solution is to allow ignore_changes to accept a JSON path relative to object fields.
Something like this:
lifecycle { ignore_changes = [jsonPath(replication_task_settings, ".Logging.CloudWatchLogGroup")] }
Is this somehow possible already, or maybe it makes sense to implement this feature?

Hi folks, I've opened up a PR with a fix for this issue here: https://github.com/terraform-providers/terraform-provider-aws/pull/13476

Thanks to @nijave and @dariusjs for the idea, though I ultimately ended up going in a slightly different direction.

Please add a reaction so we can get this merged in.
