Terraform v0.12.12
```hcl
resource "aws_codebuild_project" "kinesis_lambda" {
  name          = "ads-kinesis-lambda-transformation-codebuild"
  build_timeout = "60"
  service_role  = "${aws_iam_role.codebuild_role.arn}"

  artifacts {
    type = "CODEPIPELINE"
  }

  environment {
    compute_type    = "BUILD_GENERAL1_SMALL"
    image           = "aws/codebuild/standard:1.0"
    type            = "LINUX_CONTAINER"
    privileged_mode = true
  }

  logs_config {
    cloudwatch_logs {
      group_name  = "${aws_cloudwatch_log_group.kinesis_lambda_codebuild.name}"
      stream_name = "codebuild_logs"
    }
  }

  source {
    type      = "CODEPIPELINE"
    buildspec = "${data.template_file.kinesis_lambda_buildspec.rendered}"
  }

  tags = "${merge(map("Name", "Kinesis Lambda CodeBuild Project"), var.codepipeline_tags, var.custom_tags)}"
}
```
The buildspec template rendered by `data.template_file.kinesis_lambda_buildspec`:
```yaml
version: 0.2
env:
  parameter-store:
    LAMBDA_BUCKET: ${ssm_lambda_bucket_name}
phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - echo "Upgrade to latest awscli and yamllint"
  pre_build:
    commands:
      - echo Validation started on `date`
  build:
    commands:
      - echo Validation completed on `date`
artifacts:
  files:
    - deploy-kinesis-lambda-transformation.yaml
```
```
Error: Provider produced inconsistent final plan

When expanding the plan for
module.kinesis_lambda.aws_codebuild_project.kinesis_lambda to include new
values learned so far during apply, provider "aws" produced an invalid new
value for .source: planned set element
cty.ObjectVal(map[string]cty.Value{"auth":cty.SetValEmpty(cty.Object(map[string]cty.Type{"resource":cty.String,
"type":cty.String})), "buildspec":cty.UnknownVal(cty.String),
"git_clone_depth":cty.NullVal(cty.Number),
"insecure_ssl":cty.NullVal(cty.Bool), "location":cty.StringVal(""),
"report_build_status":cty.NullVal(cty.Bool),
"type":cty.StringVal("CODEPIPELINE")}) does not correlate with any element in
actual.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.
```
Expected behavior: `terraform apply` should apply completely.

Actual behavior: resources fail to apply with the error above.

Steps to reproduce: run `terraform apply`. The first run created the resources; on the second run of `terraform apply`, I get the same error. I could only get Terraform to apply again after I destroyed the resources.
```hcl
resource "aws_codebuild_project" "codebuild" {
  name          = "codebuild-project"
  build_timeout = "5"
  service_role  = "${var.ecs_role_arn}"

  artifacts {
    type = "NO_ARTIFACTS"
  }

  environment {
    compute_type    = "BUILD_GENERAL1_SMALL"
    # https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html
    image           = "aws/codebuild/amazonlinux2-x86_64-standard:1.0"
    type            = "LINUX_CONTAINER"
    privileged_mode = true
  }

  source {
    type                = "GITHUB_ENTERPRISE"
    location            = "${var.github_repo}"
    report_build_status = true
    buildspec           = "${data.template_file.buildspec.rendered}"

    auth {
      type     = "OAUTH"
      resource = "${aws_codebuild_source_credential.token.arn}"
    }
  }

  vpc_config {
    vpc_id             = "${var.vpc_id}"
    subnets            = ["${aws_subnet.subnet_a.id}", "${aws_subnet.subnet_b.id}"]
    security_group_ids = ["${aws_security_group.application_inbound.id}"]
  }
}
```
```
Error: Provider produced inconsistent final plan

When expanding the plan for aws_codebuild_project.codebuild to include new
values learned so far during apply, provider "aws" produced an invalid new
value for .source: planned set element
cty.ObjectVal(map[string]cty.Value{"auth":cty.SetVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"resource":cty.StringVal("RESOURCE_ARN"),
"type":cty.StringVal("OAUTH")})}), "buildspec":cty.UnknownVal(cty.String),
"git_clone_depth":cty.NullVal(cty.Number),
"insecure_ssl":cty.NullVal(cty.Bool),
"location":cty.StringVal("GITHUB_URL"),
"report_build_status":cty.True, "type":cty.StringVal("VARIABLE_NAME")})
does not correlate with any element in actual.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.
```
Terraform v0.12.12
macOS Sierra 10.12.6
I ran this with `TF_LOG=debug` and got the following errors:
```
2019/12/06 16:04:36 [WARN] Provider "aws" produced an unexpected new value for module.mymodule.module.mymodule.aws_ecs_task_definition.mytask, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .container_definitions: was cty.StringVal("[{"cpu":1024,"essential":true,"image":"309403177233.dkr.ecr.us-west-2.amazonaws.com/mytask","logConfiguration":{"logDriver":"awslogs","options":{"awslogs-group":"mytask-logs","awslogs-region":"us-west-2","awslogs-stream-prefix":"mytask"}},"memory":2048,"name":"mytask","networkMode":"awsvpc","portMappings":[{"containerPort":3000,"hostPort":3000}],"ulimits":[{"hardLimit":65535,"name":"nofile","softLimit":65535}]}]"), but now cty.StringVal("[{"cpu":1024,"environment":[],"essential":true,"image":"309403177233.dkr.ecr.us-west-2.amazonaws.com/mytask","logConfiguration":{"logDriver":"awslogs","options":{"awslogs-group":"mytask-logs","awslogs-region":"us-west-2","awslogs-stream-prefix":"mytask"}},"memory":2048,"mountPoints":[],"name":"mytask","portMappings":[{"containerPort":3000,"hostPort":3000,"protocol":"tcp"}],"ulimits":[{"hardLimit":65535,"name":"nofile","softLimit":65535}],"volumesFrom":[]}]")
2019/12/06 16:04:36 [TRACE] module.mymodule.module.mymodule: eval: *terraform.EvalMaybeTainted
2019/12/06 16:04:36 [WARN] Provider "aws" produced an invalid plan for module.kinesis_lambda.aws_codebuild_project.kinesis_lambda, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .logs_config[0].s3_logs: block count in plan (1) disagrees with count in config (0)
- .cache: block count in plan (1) disagrees with count in config (0)
2019/12/06 16:04:36 [ERROR] module.kinesis_lambda: eval: *terraform.EvalCheckPlannedChange, err: Provider produced inconsistent final plan: When expanding the plan for module.kinesis_lambda.aws_codebuild_project.kinesis_lambda to include new values learned so far during apply, provider "aws" produced an invalid new value for .source: planned set element cty.ObjectVal(map[string]cty.Value{"auth":cty.SetValEmpty(cty.Object(map[string]cty.Type{"resource":cty.String, "type":cty.String})), "buildspec":cty.UnknownVal(cty.String), "git_clone_depth":cty.NullVal(cty.Number), "insecure_ssl":cty.NullVal(cty.Bool), "location":cty.StringVal(""), "report_build_status":cty.NullVal(cty.Bool), "type":cty.StringVal("CODEPIPELINE")}) does not correlate with any element in actual.
This is a bug in the provider, which should be reported in the provider's own issue tracker.
2019/12/06 16:04:36 [ERROR] module.kinesis_lambda: eval: *terraform.EvalSequence, err: Provider produced inconsistent final plan: When expanding the plan for module.kinesis_lambda.aws_codebuild_project.kinesis_lambda to include new values learned so far during apply, provider "aws" produced an invalid new value for .source: planned set element cty.ObjectVal(map[string]cty.Value{"auth":cty.SetValEmpty(cty.Object(map[string]cty.Type{"resource":cty.String, "type":cty.String})), "buildspec":cty.UnknownVal(cty.String), "git_clone_depth":cty.NullVal(cty.Number), "insecure_ssl":cty.NullVal(cty.Bool), "location":cty.StringVal(""), "report_build_status":cty.NullVal(cty.Bool), "type":cty.StringVal("CODEPIPELINE")}) does not correlate with any element in actual.
This is a bug in the provider, which should be reported in the provider's own issue tracker.
2019/12/06 16:04:36 [TRACE] [walkApply] Exiting eval tree: module.kinesis_lambda.aws_codebuild_project.kinesis_lambda
2019/12/06 16:04:36 [TRACE] vertex "module.kinesis_lambda.aws_codebuild_project.kinesis_lambda": visit complete
```
I'm having a really similar situation when working with DynamoDB streams:
```hcl
locals {
  ddb_name_users = [
    "tf-${var.environment}-users",
    "tf-${var.environment}-users-restore",
  ]
}

resource "aws_dynamodb_table" "users" {
  count = 2

  name             = element(local.ddb_name_users, count.index)
  billing_mode     = "PAY_PER_REQUEST"
  hash_key         = "Id"
  stream_enabled   = true
  stream_view_type = "NEW_AND_OLD_IMAGES"

  attribute {
    name = "Id"
    type = "S"
  }

  attribute {
    name = "DbRef"
    type = "S"
  }

  attribute {
    name = "UserDiscriminator"
    type = "N"
  }

  attribute {
    name = "Email"
    type = "S"
  }

  global_secondary_index {
    name            = "DbRef"
    hash_key        = "DbRef"
    range_key       = "UserDiscriminator"
    projection_type = "KEYS_ONLY"
  }

  global_secondary_index {
    name            = "Email-UserDiscriminator"
    hash_key        = "Email"
    range_key       = "UserDiscriminator"
    projection_type = "KEYS_ONLY"
  }
}

resource "aws_lambda_event_source_mapping" "user_streamer_binding" {
  event_source_arn  = aws_dynamodb_table.users[0].stream_arn
  function_name     = "arn:aws:lambda:${var.region}:${var.aws_account_id}:function:${var.environment}-spotmail-users-streamer"
  starting_position = "LATEST"
  batch_size        = var.users_streamer_batch_size
  enabled           = true
}
```
And I'm getting this error:
```
Error: Provider produced inconsistent final plan

When expanding the plan for aws_lambda_event_source_mapping.user_streamer_binding to include new values learned so far during apply, provider "aws" produced an invalid new value for .event_source_arn: was cty.StringVal(""), but now cty.StringVal("arn:aws:dynamodb:eu-west-1:394990067255:table/tf-stagingaws-users/stream/2019-12-09T06:21:24.347").

This is a bug in the provider, which should be reported in the provider's own issue tracker.
```
@martyna-autumn - I'm seeing the same issue as you with dynamodb streams and lambda mapping. Were you able to find a workaround?
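One workaround sometimes used for "inconsistent final plan" errors like this (a hypothesis, not something confirmed in this thread) is a two-step apply: create the table and its stream first with a targeted apply, so the stream ARN is already known when the event source mapping is planned.

```shell
# Hypothetical workaround: materialize the stream ARN first,
# then run a full apply so the mapping plans against a known value.
terraform apply -target=aws_dynamodb_table.users
terraform apply
```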
To elaborate on what @markdfisher said, we're running into the same issue @martyna-autumn ran into, but I'm confused as to why TF is even showing this error.
Here's the HCL:
resource "aws_lambda_event_source_mapping" "foo" {
event_source_arn = aws_dynamodb_table.vault-db.stream_arn
function_name = module.dynamodb-stream-lambda.arn
starting_position = "LATEST"
}
The docs suggest a custom `ComputedIf` function is needed here:

> ComputedIf returns a CustomizeDiffFunc that sets the given key's new value as computed if the given condition function returns true.
But `aws_dynamodb_table.vault-db.stream_arn` is a `Computed: true` attribute. Why would a provider author need to suss out and specially mark attributes that are already marked as Computed? Shouldn't Terraform already know this is a computed attribute?
Also curious is this part of the error:

```
was cty.StringVal(""), but now cty.StringVal("arn:aws:dynamodb:eu-west-1:394990067255:table/tf-stagingaws-users/stream/2019-12-09T06:21:24.347")
```

The `event_source_arn` attribute is required. In this particular case, the attribute being `""` at plan time but holding a real value at apply time seems completely expected?
Hi! If anyone else is facing the same issue with DynamoDB streams and lambda mapping: I was able to get it working after upgrading to AWS provider 1.56.0.
The fix for this has been merged and will release with v2.64.0 of the Terraform AWS Provider, expected in this week's release.
This has been released in version 2.64.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
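For anyone upgrading, a minimal sketch of pinning the fixed release in a Terraform 0.12 configuration (the region value here is illustrative):

```hcl
provider "aws" {
  # Require at least the release that carries the fix for this issue.
  version = ">= 2.64.0"
  region  = "us-west-2"
}
```

After updating the constraint, run `terraform init -upgrade` to pull the newer provider.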
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!