Is it now possible to use source_code_hash for files (such as a JAR) stored in an S3 bucket? Request #6860 seems to imply this has been fixed, but the documentation still only describes its use with filename.
If it has, how do I specify the location as the S3 bucket and key?
I'm having a similar issue. Is there any update on whether this issue has been resolved?
Yes, I think this is just a documentation bug. It should now be possible to use source_code_hash regardless of which method is used to provide the archive file.
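A minimal sketch of what that looks like, assuming the archive has already been uploaded to S3 and a local copy is available to hash (the bucket, key, and variable names below are illustrative, not from any real configuration):

resource "aws_lambda_function" "example" {
  function_name    = "example-function"
  s3_bucket        = "my-artifact-bucket"
  s3_key           = "my-function.zip"
  handler          = "index.handler"
  runtime          = "nodejs6.10"
  role             = "${var.iam_role_arn}"

  # file() reads from local disk, so this hashes a local copy of the
  # same archive that was uploaded to S3.
  source_code_hash = "${base64sha256(file("my-function.zip"))}"
}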
Many thanks, I will give it a try. Do you know if it will work with JAR files, or does it have to have a .zip extension?
The requirements for this file vary depending on which runtime is in use. I'm not very familiar with the Java runtime, but a quick look at the docs suggests that a JAR file is indeed expected in that case.
The resource doesn't care what the name of the file (or key of the S3 object) is, but the underlying API will validate that the content is valid.
I get the following error message:
1 error(s) occurred:
module.lambda.aws_lambda_function.ShoppingLambdaFunction: 1 error(s) occurred:
module.lambda.aws_lambda_function.ShoppingLambdaFunction: file: open shopping-service-v0.0.2.jar: no such file or directory in:
${base64sha256(file("shopping-service-v0.0.2.jar"))}
The aws_lambda_function resource is as follows:
resource "aws_lambda_function" "ShoppingLambdaFunction" {
s3_bucket = "test-shopping-application"
s3_key = "shopping-service-v0.0.2.jar"
function_name = "test-shopping-application"
description = "Lambda function for shopping Api"
role = "${var.iam_arn}"
runtime = "java8"
source_code_hash = "${base64sha256(file("shopping-service-v0.0.2.jar"))}"
handler = "test.shopping.ApplicationHandler"
memory_size = "1536"
timeout = "15"
Can you shed any light on why this is saying it can't see the JAR file in the bucket?
I'm seeing the same results as alepeltier: I just get a file-not-found error.
Hi @alepeltier!
Sorry for the confusion. The file function reads from local disk, so this presumes that you have a copy of your .jar file in a local directory as well as in S3. If not, it would be necessary to download either the file itself or its base64sha256 hash so that Terraform can refer to it. Unfortunately Terraform itself can't directly help with this, but if you have a script or other process outside of Terraform that uploads the .jar file to S3, you could alter that process to compute the base64sha256 of the file, put it alongside the file in your S3 bucket, and then retrieve that object with Terraform like this:
data "aws_s3_bucket_object" "jar_hash" {
bucket = "test-shopping-application"
key = "shopping-service-v0.0.2.jar.base64sha256"
}
resource "aws_lambda_function" "ShoppingLambdaFunction" {
s3_bucket = "test-shopping-application"
s3_key = "shopping-service-v0.0.2.jar"
function_name = "test-shopping-application"
description = "Lambda function for shopping Api"
role = "${var.iam_arn}"
runtime = "java8"
source_code_hash = "${data.aws_s3_bucket_object.jar_hash.body))}"
handler = "test.shopping.ApplicationHandler"
memory_size = "1536"
timeout = "15"
}
For this to work, the shopping-service-v0.0.2.jar.base64sha256 object in S3 must be created with the Content-Type text/plain so that Terraform will know it's safe to retrieve and return it.
Given that this adds extra complexity to the build process, a more common solution I've seen is to simply guarantee that each new version built has a different s3_key, and then Terraform can easily tell when it needs to update the source code without needing to inspect the content. It looks like you already have a version number in your key path, so perhaps this solution will work for you.
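A minimal sketch of that versioned-key approach, assuming the version number is supplied as a variable (the variable name here is illustrative, not taken from your configuration):

variable "app_version" {
  default = "0.0.2"
}

resource "aws_lambda_function" "ShoppingLambdaFunction" {
  s3_bucket     = "test-shopping-application"

  # A new build publishes under a new key; the key change alone tells
  # Terraform to update the function code, with no hashing required.
  s3_key        = "shopping-service-v${var.app_version}.jar"
  function_name = "test-shopping-application"
  role          = "${var.iam_arn}"
  runtime       = "java8"
  handler       = "test.shopping.ApplicationHandler"
}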
I'm going to re-close this, since it seems like everything is working as expected. I apologize that it's hard to have an ongoing question-and-answer thread on GitHub issues, since things get lost among other notifications; if you have further questions I would recommend asking on the Terraform mailing list or Gitter chat, which are linked from the community page.
I solved it this way:
resource "aws_s3_bucket_object" "build_code" {
bucket = "${aws_s3_bucket.build.id}"
key = "lambda-code.zip.${base64sha256(file("lambda-code.zip"))}"
source = "lambda-code.zip"
etag = "${md5(file("lambda-code.zip"))}"
}
resource "aws_lambda_function" "build" {
depends_on = ["aws_s3_bucket_object.build_code"]
function_name = "build_lambda"
handler = "index.handler"
role = "${aws_iam_role.build.arn}"
runtime = "nodejs6.10"
s3_bucket = "${aws_s3_bucket.build.id}"
s3_key = "lambda-code.zip.${base64sha256(file("lambda-cunde.zip"))}"
}
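This works because the content hash is embedded in the object key itself, so any change to the archive produces a new key and Terraform knows it needs to update the function. One small refinement, not part of the comment above: computing the key once in a local avoids repeating the interpolation in two places and keeps the two copies from drifting apart. A sketch in the same 0.11 syntax:

locals {
  # Illustrative helper: the key embeds the archive's hash, so it changes
  # whenever lambda-code.zip changes.
  code_key = "lambda-code.zip.${base64sha256(file("lambda-code.zip"))}"
}

Both key in aws_s3_bucket_object and s3_key in aws_lambda_function can then be set to "${local.code_key}".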
By the way, the hash calculated by sha256sum lambda.zip | base64 -w0 does not match the one from AWS, resulting in Terraform changing Lambdas on every run. That pipeline base64-encodes the hex text printed by sha256sum (including the trailing filename), whereas AWS expects the base64 encoding of the raw binary digest. The correct hash can be calculated by:
openssl dgst -sha256 -binary lambda.zip | openssl enc -base64
Maybe it's worth adding to the docs?
Getting the same error:
Terraform Version: 0.11.7
Resource ID: aws_lambda_function.commit_event
Mismatch reason: extra attributes: s3_object_version
Diff One (usually from plan): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"source_code_hash":*terraform.ResourceAttrDiff{Old:"7Oj/AF827enx9m6+tf+1ExAlJkYkCv/zD7r6GizxcbY=", New:"kh3hFTOWX1uU8K1Q7irfpuyIBrv8TXkZM7o/Q4IYx6A=", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "last_modified":*terraform.ResourceAttrDiff{Old:"2018-05-11T19:54:18.592+0000", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)}
Diff Two (usually from apply): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"s3_object_version":*terraform.ResourceAttrDiff{Old:"PLE0ESmVlVjfpjs8kWoJcwBSrqZ9fG5Z", New:"DX0LHLkbtd.tD2bhs9tzTJCyo5p8hYuW", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "source_code_hash":*terraform.ResourceAttrDiff{Old:"7Oj/AF827enx9m6+tf+1ExAlJkYkCv/zD7r6GizxcbY=", New:"kh3hFTOWX1uU8K1Q7irfpuyIBrv8TXkZM7o/Q4IYx6A=", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "last_modified":*terraform.ResourceAttrDiff{Old:"2018-05-11T19:54:18.592+0000", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)}
Funnily enough, the second time I run apply it completes fine.
Not sure it's the same error, to be honest:
locals {
  bucket_key = "${var.environment}/${var.artifact_version}/${var.artifact_name}"
}

# the code will be uploaded to this object in the data bucket
resource "aws_s3_bucket_object" "code_object" {
  bucket = "${var.code_bucket}"
  key    = "${local.bucket_key}"
  source = "${local.zip_full_path}"
  etag   = "${md5(file(local.zip_full_path))}"
  tags   = "${local.tags}"
}
resource "aws_lambda_function" "commit_event" {
function_name = "${local.full_prefix}-commit-event"
s3_bucket = "${var.code_bucket}"
s3_key = "${aws_s3_bucket_object.code_object.key}"
s3_object_version = "${aws_s3_bucket_object.code_object.version_id}"
...
source_code_hash = "${base64sha256(file(local.zip_full_path))}"
...
Is there going to be any work to address this on the Terraform side, rather than a user having to create SHAs? That way works, but I would rather not do that on our side if it can be supported by Terraform.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.