In some cases, it's necessary to specify the batching window parameter, e.g. to accumulate batches of events. AWS CLI supports that (--maximum-batching-window-in-seconds parameter) so Terraform should support that too.
I haven't yet fiddled with Terraform code before but I'd like to implement it.
Thank you guys for your continued work.
Example `aws_lambda_event_source_mapping` for Kinesis, but it would be similar for SQS or DynamoDB streams:
resource "aws_lambda_event_source_mapping" "kinesis_to_lambda" {
event_source_arn = aws_kinesis_stream.kinesis_stream.arn
function_name = aws_lambda_function.kinesis_handler.arn
batch_size = 100
maximum_batching_window = 5
}
For anyone wondering about a workaround in the meantime:
```sh
aws lambda update-event-source-mapping \
  --uuid 1cd76e1c-221f-4f9d-a64b-31bb5e0bee6f \
  --maximum-batching-window-in-seconds 10
```
As Terraform isn't aware of the attribute, it won't show any changes on the next plan, but it _will_ show a change once the provider supports the attribute.
Note that it has to be done this way around: event source mappings are effectively immutable in the console (no edits allowed), and if you create one manually and then import it into TF, it will try to destroy and recreate the mapping regardless because, as noted in the TF docs, "AWS does not expose startingPosition information for existing Lambda event source mappings".
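If you don't have the mapping UUID handy, it can be looked up (and the change verified afterwards) from the CLI as well; `my-function` below is just a placeholder function name:

```sh
# Find the UUID of the event source mapping attached to the function
aws lambda list-event-source-mappings --function-name my-function

# After running the update above, confirm the new batching window took effect
aws lambda get-event-source-mapping --uuid 1cd76e1c-221f-4f9d-a64b-31bb5e0bee6f
```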
Came across this Pull Request
Support for the new `maximum_batching_window_in_seconds` argument in the `aws_lambda_event_source_mapping` resource has been merged and will release with version 2.39.0 of the Terraform AWS Provider, likely tomorrow. Thanks to @tiny-dancer for the implementation. 👍
This has been released in version 2.39.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!
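For anyone upgrading, here's a minimal sketch of a config using the new argument, assuming a Kinesis source; the resource names and region are placeholders, and the provider version constraint is just one way to make sure you're on 2.39.0 or later:

```hcl
# Require a provider version that includes the new argument
provider "aws" {
  version = ">= 2.39.0"
  region  = "us-east-1" # placeholder region
}

resource "aws_lambda_event_source_mapping" "example" {
  event_source_arn = aws_kinesis_stream.example.arn
  function_name    = aws_lambda_function.example.arn
  batch_size       = 100

  # Wait up to 5 seconds to accumulate records before invoking the function
  maximum_batching_window_in_seconds = 5
}
```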
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!