_This issue was originally opened by @coryodaniel as hashicorp/terraform#12948. It was migrated here as part of the provider split. The original body of the issue is below._
`aws_lambda_alias` uses the previous version when a function gets updated.
Currently I have a set of versions of a function published:
```
aws lambda list-versions-by-function --function-name my_function
{
    "Versions": [
        {
            "Version": "$LATEST",
            "CodeSha256": "+r160+gKUVRWTAZJHWhBcqG+1u0H1W1YC0OG+dFmn/Q=",
            "FunctionName": "my_function",
            "MemorySize": 128,
            "CodeSize": 602736,
            "FunctionArn": "...:$LATEST",
            "Handler": "index.handler",
            "Role": "",
            "Timeout": 5,
            "LastModified": "2017-03-21T23:30:56.648+0000",
            "Runtime": "nodejs4.3"
        },
        {
            "Version": "8",
            "CodeSha256": "1iXdp2zPeT1wrM+30C8adhM7NveU+3On/1207H4B0mI=",
            "FunctionName": "my_function",
            "MemorySize": 128,
            "CodeSize": 602699,
            "FunctionArn": "...:8",
            "Handler": "index.handler",
            "Role": "...",
            "Timeout": 5,
            "LastModified": "2017-03-21T21:24:41.362+0000",
            "Runtime": "nodejs4.3"
        },
        {
            "Version": "9",
            "CodeSha256": "+r160+gKUVRWTAZJHWhBcqG+1u0H1W1YC0OG+dFmn/Q=",
            "FunctionName": "my_function",
            "MemorySize": 128,
            "CodeSize": 602736,
            "FunctionArn": "...:9",
            "Handler": "index.handler",
            "Role": "...",
            "Timeout": 5,
            "LastModified": "2017-03-21T23:30:56.180+0000",
            "Runtime": "nodejs4.3"
        }
    ]
}
```
As you can see, there are two versions here: `8` and `9`. When I run `terraform plan` with the `aws_lambda_function` changing (you can see the `source_code_hash` change), I get this output:
```
~ module.lambda_release.aws_lambda_alias.instance
    function_version: "8" => "9"

~ module.lambda_release.aws_lambda_function.instance
    source_code_hash: "+r160+gKUVRWTAZJHWhBcqG+1u0H1W1YC0OG+dFmn/Q=" => "hDFhqlUUGayLiZU9dPI2LGFRkPYBOXPSPP0Md3EK0oQ="
```
`9` already exists; when this function is uploaded it will be version `10`, yet the plan only moves the alias from `8` to `9`, the previous version...
Terraform v0.9.1
The relevant configuration:
resource "aws_lambda_function" "instance" {
function_name = "${var.function_name}"
description = "[${var.stage_name}] ${var.description}"
filename = "${var.zip_path}"
source_code_hash = "${base64sha256(file("${var.zip_path}"))}"
handler = "index.handler"
runtime = "nodejs4.3"
memory_size = 128
timeout = 5
role = "${var.role}"
publish = true
}
resource "aws_lambda_alias" "instance" {
depends_on = ["aws_lambda_function.instance"]
name = "${var.stage_name}"
description = "${var.stage_name}"
function_name = "${aws_lambda_function.instance.arn}"
function_version = "${aws_lambda_function.instance.version}"
}
Is there any update on this?
I can confirm that this issue exists in Terraform v0.9.11.
Having the same issue with v0.10.0.
Ditto for v0.10.7: I have to `apply` a second time after updating the source to cause a new version to be created, both when using the S3 integration and when uploading with the `filename` argument. Here's a sample configuration, in case it's helpful:
resource "aws_lambda_function" "rupertsberg" {
filename = "../../dist/lambda.zip"
function_name = "rupertsberg"
description = "Ignota Media corporate homepage."
role = "${ aws_iam_role.lambda_access.arn }"
handler = "app.rupertsberg"
source_code_hash = "${ base64sha256(file("../../dist/lambda.zip")) }"
runtime = "nodejs6.10"
memory_size = 512
publish = true
}
resource "aws_lambda_alias" "current_stage" {
name = "${ upper(var.stage) }"
function_name = "${ aws_lambda_function.rupertsberg.arn }"
function_version = "${ aws_lambda_function.rupertsberg.version }"
}
The first `apply` updates the `source_code_hash`, as expected...
```
aws_lambda_function.rupertsberg: Modifying... (ID: rupertsberg)
  source_code_hash: "QUt+b5uYPC5cf38plDMIQ2LcWBSkpMcO4ZMeVa96g/E=" => "64GvTDvavWrQsipp6StEiv0FNkJ0Jt4PEPzk0jNtEFI="
aws_lambda_function.rupertsberg: Modifications complete after 1s (ID: rupertsberg)
```
...but only the second `apply` actually changes the version of the `aws_lambda_alias`:
```
aws_lambda_alias.current_stage: Modifying... (ID: arn:aws:lambda:us-east-2:007251314244:function:rupertsberg:PRODUCTION)
  function_version: "20" => "21"
aws_lambda_alias.current_stage: Modifications complete after 0s (ID: arn:aws:lambda:us-east-2:007251314244:function:rupertsberg:PRODUCTION)
```
I believe this issue is also related to the following error. If you try to set the `description` of the `aws_lambda_alias` resource to anything computed (like the `source_code_hash` from the Lambda function, or even `timestamp()`), the apply fails.
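A minimal sketch of the failing pattern, for illustration only; the resource names and the choice of `source_code_hash` as the computed `description` are assumptions, not my exact config:

```hcl
# Hypothetical illustration -- any computed value in "description" triggers the crash.
resource "aws_lambda_alias" "lambda_alias" {
  name             = "${var.stage}"
  description      = "${aws_lambda_function.lambda.source_code_hash}" # computed at apply time
  function_name    = "${aws_lambda_function.lambda.arn}"
  function_version = "${aws_lambda_function.lambda.version}"
}
```

The resulting error: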
```
Error applying plan:

1 error(s) occurred:

* module.helloworld_lambda.aws_lambda_alias.lambda_alias: aws_lambda_alias.lambda_alias: diffs didn't match during apply. This is a bug with Terraform and should be reported as a GitHub Issue.

Please include the following information in your report:

Terraform Version: 0.10.6
Resource ID: aws_lambda_alias.lambda_alias
Mismatch reason: extra attributes: function_version
Diff One (usually from plan): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"description":*terraform.ResourceAttrDiff{Old:"xdETUTPJGWoV7ZSRKTUL7w6BRVHqMbniRDPebGML0PM=", New:"uBUVNUpA+SNQH01UtLPa49wJmS5D9+Pd5TlGoP9xTYE=", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)}
Diff Two (usually from apply): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"function_version":*terraform.ResourceAttrDiff{Old:"14", New:"15", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "description":*terraform.ResourceAttrDiff{Old:"xdETUTPJGWoV7ZSRKTUL7w6BRVHqMbniRDPebGML0PM=", New:"uBUVNUpA+SNQH01UtLPa49wJmS5D9+Pd5TlGoP9xTYE=", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)}

Also include as much context as you can about your config, state, and the steps you performed to trigger this error.

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
```
As best I can tell (and I'm no expert here for sure), it seems that the `function_version` doesn't get properly set for the alias resource during planning when the Lambda function is updated, which is why the above fails, and why no alias change is planned at all if you are only updating `function_version`.
In the meantime, I found that this silliness (computing the version via an intermediate `template_file` data source) somehow forces the alias resource to update during planning and works around the issue:
```hcl
# Lambda
resource "aws_lambda_function" "lambda" {
  function_name    = "${var.stage}_${var.name}"
  filename         = "${var.file}"
  role             = "${aws_iam_role.execution_lambda_role.arn}"
  handler          = "${var.handler}"
  memory_size      = "${var.memory_size}"
  timeout          = "${var.timeout}"
  publish          = "${var.publish}"
  source_code_hash = "${base64sha256(file("${var.file}"))}"
  runtime          = "${var.runtime}"

  environment {
    variables = "${var.env_variables}"
  }
}

data "template_file" "function_version" {
  template = "$${function_version}"

  vars {
    function_version = "${aws_lambda_function.lambda.version}"
  }

  depends_on = ["aws_lambda_function.lambda"]
}

resource "aws_lambda_alias" "lambda_alias" {
  name             = "${var.stage}"
  function_name    = "${aws_lambda_function.lambda.arn}"
  function_version = "${data.template_file.function_version.rendered}"

  depends_on = ["aws_lambda_function.lambda", "data.template_file.function_version"]
}
```
I believe I'm seeing a similar crash to @ryandub, except in my case it's with `aws_cloudfront_distribution`, like so:
resource "aws_cloudfront_distribution" "docs" {
# [...]
default_cache_behavior {
# [...]
lambda_function_association {
event_type = "viewer-request"
lambda_arn = "${aws_lambda_function.docs.arn}:${aws_lambda_function.docs.version}"
}
}
}
His fix also works in my case, thankfully 🙏 (roughly as sketched below). I can post the diff output if it's useful.
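For reference, a rough sketch of that workaround applied to the CloudFront association; the `docs_function_version` data source name is made up here, and the rest just follows the `template_file` pattern from the comment above:

```hcl
# Hypothetical sketch -- route the version through a template_file so the
# rendered value is resolved during planning.
data "template_file" "docs_function_version" {
  template = "$${function_version}"

  vars {
    function_version = "${aws_lambda_function.docs.version}"
  }
}

resource "aws_cloudfront_distribution" "docs" {
  # [...]

  default_cache_behavior {
    # [...]

    lambda_function_association {
      event_type = "viewer-request"
      lambda_arn = "${aws_lambda_function.docs.arn}:${data.template_file.docs_function_version.rendered}"
    }
  }
}
```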
I've seen variations on this problem as well, where the new version is not used by dependent resources, and I'd like to take a stab at fixing it today. I have a pretty good idea of what is causing it, and I'll try to create a test case and a fix so the workaround is not needed.
Thanks to @mdlavin, the fix for this has been merged into master and will be released in v1.10.0 of the AWS provider, likely later today or Monday. 🎉
This has been released in version 1.10.0 of the AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
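For anyone upgrading, pinning the provider to pick up the fix looks roughly like this; the exact constraint shown is just one reasonable choice:

```hcl
# Constrain the AWS provider to 1.10.x or newer within the 1.x series.
provider "aws" {
  version = "~> 1.10"
}
```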
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!