Aws-cdk: s3: allow defining bucket notifications for unowned buckets

Created on 12 Mar 2019 · 58 Comments · Source: aws/aws-cdk

The use case is fairly simple: create a new Lambda function and attach an event source for object-creation events from an existing S3 bucket.

Example code:

   const exampleBucket = Bucket.import(this, 'example-bucket', {bucketArn: 'arn:aws:s3:::example-bucket-arn'});
   fn.addEventSource(new S3EventSource(exampleBucket, { events: [EventType.ObjectCreated] }));

This code has a type mismatch (ImportedBucket vs Bucket), and doesn't work regardless.
Is there another way to use an existing bucket for an S3 event source?

Additional 2x👍 from duplicated issues

@aws-cdk/aws-s3 @aws-cdk/aws-s3-notifications effort/large feature-request in-progress management/roadmap p1

Most helpful comment

please add possibility to add notification from an existing s3 bucket! that's needed a lot, it's a very common use case!

All 58 comments

I have the same problem too.
It looks like the IBucket interface isn't an extension of the Bucket interface, and doesn't have the onEvent() function.
I found instead the onPutObject() function which is kind of suitable for my intent, but it isn't as general as onEvent().

Is this difference intended? Am I missing something?

Thanks in advance for the clarification.

Due to current limitations with CloudFormation and the way we implemented bucket notifications in the CDK, it is impossible to add bucket notifications on an imported bucket. This is why the event source uses s3.Bucket instead of s3.IBucket.

You could use onPutObject:

const bucket = s3.Bucket.import(this, 'B', {
  bucketName: 'my-bucket'
});

const fn = new lambda.Function(this, 'F', {
  code: lambda.Code.inline('boom'),
  runtime: lambda.Runtime.NodeJS810,
  handler: 'index.handler'
});

bucket.onPutObject('put-object', fn);

We can vend another lambda event source which will be more limited in capabilities and will use putObject. Maybe something like S3PutObjectEventSource

@eladb

You could use onPutObject:

sure I can, but what if I want to bind the lambda to other bucket events?

Below are the events you can choose from the console, and the checked option is my choice.
I cannot do the same thing from the cdk apparently.
[Screenshot: the S3 event notification types selectable in the console, with the desired option checked]

Am I correct?

Sadly no, and it's not possible from CloudFormation either.

Oh, I see. Should we point this out to AWS directly then?

Hi, I'm facing the same problem. Is there any news so far or is the issue planned to be fixed soon?

If I'm understanding correctly what @eladb proposes, it implies a different event notification flow:

  • The first one, which is the one that I need, comes from an event source notification and is reflected as an event setting on the bucket.
  • The approach from @eladb comes from a CloudTrail event and is reflected as object-level logging, adding additional cost to the pipeline.

Is there no way to add an event source to a lambda function from an existing bucket using the AWS CDK and without using CloudTrail?

This is my use case example using serverless framework and what I'm trying to replicate using the cdk:

plugins:
  - serverless-plugin-existing-s3

functions:
  <function-name>:
    handler: handler.main
    events:
      - existingS3:
          bucket: <bucket-name>
          events:
            - s3:ObjectCreated:*
          rules:
            - Suffix: .foo

aws-cdk.aws-s3.Bucket.onPutObject appears to no longer exist.
on_cloud_trail_put_object looks like the closest alternative, but that requires creating a CloudTrail trail to capture the event first.
Is that the best way to do this now?

Bucket.add_event_notification seems to have replaced Bucket.add_event_source.
Updated minimal example that works with a Bucket but not an IBucket below.

my_bucket.add_event_notification(
    aws_s3.EventType.OBJECT_CREATED,
    aws_s3_notifications.LambdaDestination(my_lambda),
)

@eladb Are there further updates on this? We have many use cases that need notifications on existing buckets. According to the ticket below, this is implemented in Serverless; are there any plans to extend this to CDK?

Add support for existing S3 buckets #6290
https://github.com/serverless/serverless/pull/6290

+1 @dandu1008 @eladb

It is already implemented in Serverless Framework. We also have a lot of use cases that consist of adding lambda notifications on existing buckets.

Anyway, it seems this is not a problem related only to existing buckets, but to existing resources in general. We've tried the same approach, but subscribing a lambda to an existing SNS topic instead of an existing bucket (this is also possible using Serverless Framework), and it ends in the same result: you cannot add it because importing an existing SNS topic (using fromTopicArn()) returns an ITopic, while addEventSource() expects a Topic.

It is important for us, and in my opinion it's not an uncommon use case; it would be great to have this implemented.

Any feedback from your side would be appreciated, just to know that this ticket has not been forgotten.

Best Regards

@nebur395 -- looking at the Serverless link you sent, you could approach the problem in a similar way using this experimental package. It's obviously experimental, but perhaps you could contribute feedback and a PR if you find something robust that the community needs?

Due to current limitations with CloudFormation and the way we implemented bucket notifications in the CDK, it is impossible to add bucket notifications on an imported bucket. This is why the event source uses s3.Bucket instead of s3.IBucket.

You could use onPutObject:

const bucket = s3.Bucket.import(this, 'B', {
  bucketName: 'my-bucket'
});

const fn = new lambda.Function(this, 'F', {
  code: lambda.Code.inline('boom'),
  runtime: lambda.Runtime.NodeJS810,
  handler: 'index.handler'
});

bucket.onPutObject('put-object', fn);

Fundamentally this problem stems from the way the cloudformation team designed the S3 bucket resource. They chose to put the notifications as a property of the bucket itself rather than as a separate resource.

This choice has always baffled me and it results in the inability to do anything with an existing S3 bucket other than set the S3 bucket policy (which is a separate resource in cloudformation).

Until such time as this changes (which is probably never) CDK will not be able to implement notifications on existing buckets without using Custom Resources. This is because CDK is ultimately just producing a cloudformation template. I am hopeful that the CDK team may implement something using built-in custom resources magic behind the scenes that abstracts away the complexity of us having to use Custom Resources directly.

This StackOverflow thread explains the problem well and provides a solution in terms of a lambda function backed custom resource for cloudformation. The next part of the puzzle (for me) is how to implement this in CDK!

@eladb Is this https://github.com/aws/aws-cdk/blob/master/packages/%40aws-cdk/aws-s3/lib/notifications-resource/notifications-resource.ts intended to solve this problem? Are there any examples of using it?

@reidca - No it's not. Fundamentally, CloudFormation has no standalone resource for this; in order to be able to import a bucket and add event notification targets to it, there would have to be a separate AWS::S3::BucketNotification CloudFormation type.

I'll actually take back my previous comment, I think there could be a way to accomplish this using a lambda function.

Here are some high level thoughts:

  • The stack creating the bucket notification should import the bucket so as to create an import / export relationship between the two stacks (this way it is obvious if someone tries to delete the bucket stack)
  • A lambda function would use the SDK to inspect / add the bucket notification to the original bucket. This lambda function would be similar to other singleton functions already created by the CDK and invoked during CloudFormation deployment as a CustomResource.
  • The lambda function would be invoked by the stack requesting the bucket notification (not the stack which creates the bucket)

The caveat would be if the bucket is in a different account. I'm not sure how the permissions would be sorted out for the lambda trying to modify a bucket in a separate account, but it might be possible. (I don't even know if cross account event notification is possible)
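As a sketch of the merge step such a handler would perform (the helper name and shape here are illustrative, not an actual CDK API; the real handler would call it between boto3's get_bucket_notification_configuration and put_bucket_notification_configuration):

```python
def merge_lambda_notification(config, notification_id, function_arn, events):
    """Return a copy of a bucket notification config with our Lambda entry
    added, preserving any notifications owned by other stacks."""
    merged = dict(config)
    # boto3 includes ResponseMetadata in get_bucket_notification_configuration
    # responses; it must not be sent back in the put call.
    merged.pop("ResponseMetadata", None)
    # Drop any previous entry with our Id (update-in-place), keep all others.
    entries = [e for e in merged.get("LambdaFunctionConfigurations", [])
               if e.get("Id") != notification_id]
    entries.append({
        "Id": notification_id,
        "LambdaFunctionArn": function_arn,
        "Events": list(events),
    })
    merged["LambdaFunctionConfigurations"] = entries
    return merged
```

On Delete, the same filter step without the append would remove only the stack's own entry, leaving everyone else's notifications intact.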

"A lambda function would use the SDK to inspect / add the bucket notification to the original bucket."

Is there a collection of Custom Resources anywhere in the CDK core where something like this could live? You can do pretty much anything with the SDK even if it's not supported by CloudFormation (after all, that's pretty much how tools like Terraform work). It'd be really helpful if functionality like adding bucket notifications for unowned buckets could be handled "natively" by CDK, sort of like:

import { S3BucketNotification } from "@aws-cdk/custom-resources";

-- snip --

new S3BucketNotification(this, "<name>", {
    bucket: Bucket.fromBucketArn("..."),
    events: [...],
    filterRules: [...],
    target: "..."
});

There's nothing preventing anybody from creating a third-party implementation of this, but having an "official" catalog of solutions for cases where CloudFormation design decisions make other approaches impossible would be quite handy.

@eladb

I am not sure why the addEventNotification function cannot be part of IBucket. As it is just using a lambda to set up the bucket notifications, that would work for existing buckets too.

Otherwise, I do like the "Serverless" implementation of the lambda handler, as it makes sure to keep any existing bucket notifications. You can see the implementation here: https://github.com/serverless/serverless/tree/master/lib/plugins/aws/customResources/resources/s3

Also, if you destroy the stack, it will completely clear out any of the notifications configured for that bucket:

 if (event.RequestType === 'Delete') {
    props.NotificationConfiguration = { }; // this is how you clean out notifications
  }

Here's the CustomResource CloudFormation part of the Serverless stack's implementation: https://github.com/serverless/serverless/blob/master/lib/plugins/aws/package/compile/events/s3/index.js#L281

Taken together with the link @michaelbrewer shared (https://github.com/serverless/serverless/tree/master/lib/plugins/aws/customResources/resources/s3), these give a complete picture of how Serverless Framework handles this scenario.

I was struggling with this too. Took me a while to figure out the permissions/policies. I ended up making a CDK construct that people can drop-in and call in their stack if it helps anyone.

https://gist.github.com/archisgore/0f098ae1d7d19fddc13d2f5a68f606ab

@archisgore looks like that only handles a very specific scenario - for SQS. As others have shared, attaching to Lambda Functions and SNS is arguably more common.

Interesting and thank you for sharing!

I was struggling with this too. Took me a while to figure out the permissions/policies. I ended up making a CDK construct that people can drop-in and call in their stack if it helps anyone.

https://gist.github.com/archisgore/0f098ae1d7d19fddc13d2f5a68f606ab

@archisgore thanks, that looks like it works well for SQS and can easily be modified for other targets like Lambda. I noticed that in this case, if the bucket already has existing notifications, it will destroy those existing ones.

If anybody has insight into this, that would be great!

please add possibility to add notification from an existing s3 bucket! that's needed a lot, it's a very common use case!

I have the same issue; if it's already been solved for the Serverless Framework, I expect the CDK team should be able to do the same.

this is what I found and used, you can find my answer for a brand new lambda that listens for an existing s3 bucket

https://stackoverflow.com/questions/58087772/aws-cdk-how-to-add-an-event-notification-to-an-existing-s3-bucket/63062813#63062813

this is what I found and used, you can find my answer for a brand new lambda that listens for an existing s3 bucket

https://stackoverflow.com/questions/58087772/aws-cdk-how-to-add-an-event-notification-to-an-existing-s3-bucket/63062813#63062813

thanks heaps @vzverv .. this has worked for us!

If anyone has a working python workaround please share -- been struggling to get the solutions above to work in my stack on the latest CDK version.

I see I got a few upvotes so I'll post my solution here. Taken from a few different answers on the matching stackoverflow post

#!/usr/bin/env python3

from aws_cdk import (
    core,
    aws_s3,
    aws_lambda,
    aws_iam,
    custom_resources
)

region = 'us-west-2'
bucket_name_to_import = 'my_existing_bucket'

app = core.App()

my_stack = core.Stack(app, "stack")
lambda_function = aws_lambda.Function(
    my_stack,
    "lambdaFunction",
    code=aws_lambda.Code.from_inline("whatever"),
    handler="index.handler",
    runtime=aws_lambda.Runtime.NODEJS_10_X,
)


s3_bucket = aws_s3.Bucket.from_bucket_name(
    my_stack, 'imported-bucket', bucket_name_to_import)


lambda_function.add_permission(
    's3-trigger-lambda-s3-invoke-function',
    principal=aws_iam.ServicePrincipal('s3.amazonaws.com'),
    action='lambda:InvokeFunction',
    source_arn=s3_bucket.bucket_arn)

notification_resource_id = 's3-notification-resource-upload-'+s3_bucket.bucket_name
bucket_notification_config = custom_resources.AwsSdkCall(
    service="S3",
    action="putBucketNotificationConfiguration",
    parameters={
        "Bucket": s3_bucket.bucket_name,
        "NotificationConfiguration": {
            "LambdaFunctionConfigurations": [{
                "Events": ['s3:ObjectCreated:*'],
                "LambdaFunctionArn": lambda_function.function_arn,
                "Filter": {
                    "Key": {
                        "FilterRules": [{'Name': 'prefix', 'Value': 'test/'}]
                    }
                }
            }]
        }
    },
    physical_resource_id=custom_resources.PhysicalResourceId.of(
        notification_resource_id),
    region=region
)
custom_s3_resource = custom_resources.AwsCustomResource(
    my_stack,
    's3-incoming-documents-notification-resource',
    policy=custom_resources.AwsCustomResourcePolicy.from_statements([
        aws_iam.PolicyStatement(
            effect=aws_iam.Effect.ALLOW,
            resources=['*'],
            actions=['s3:PutBucketNotification']
        )
    ]),
    on_create=bucket_notification_config,
    on_update=bucket_notification_config,
    on_delete=custom_resources.AwsSdkCall(
        service="S3",
        action="putBucketNotificationConfiguration",
        physical_resource_id=custom_resources.PhysicalResourceId.of(
            notification_resource_id),
        parameters={
            "Bucket": s3_bucket.bucket_name,
            "NotificationConfiguration": {},
        }
    ))

custom_s3_resource.node.add_dependency(lambda_function)
custom_s3_resource.node.add_dependency(s3_bucket)

app.synth()

the only issue I'm running into with this code is that cdk destroy fails to delete the event on the imported bucket.

We had to create a complete lambda in Python that handles the full life cycle.

import sys
import urllib.request
import json
import boto3
import botocore

s3_client = boto3.client('s3')

SUCCESS = "SUCCESS"
FAILED = "FAILED"


def handler(event, context):
    response_data = {}
    try:
        bucket_name = event["ResourceProperties"]["bucket_name"]
        function_name = event["ResourceProperties"]["function_name"]
        if event['RequestType'] == 'Delete':
            remove(bucket_name, function_name)
        elif event['RequestType'] == 'Create' or event['RequestType'] == 'Update':
            update(bucket_name, function_name, event["ResourceProperties"]["function_arn"], event["ResourceProperties"]["prefix"], event["ResourceProperties"]["suffix"])
            response_data = {'bucket': bucket_name, "function_name": function_name}
        response_status = SUCCESS
    except Exception as e:
        print('Failed to process:', e)
        response_status = FAILED
        response_data = {'Failure': 'Something bad happened.'}
    send(event, context, response_status, response_data)


def get_bucket_notification_configuration_filtered(bucket_name, id):
    notification_configuration = s3_client.get_bucket_notification_configuration(Bucket=bucket_name)
    if 'ResponseMetadata' in notification_configuration.keys():
        del notification_configuration['ResponseMetadata']
    if 'LambdaFunctionConfigurations' in notification_configuration.keys():
        list = notification_configuration['LambdaFunctionConfigurations']
        notification_configuration['LambdaFunctionConfigurations'] = [item for item in list if item["Id"] != id]
    else:
        notification_configuration['LambdaFunctionConfigurations'] = []
    return notification_configuration


def remove(bucket_name, function_name):
    notification_configuration = get_bucket_notification_configuration_filtered(bucket_name, function_name)
    s3_client.put_bucket_notification_configuration(Bucket=bucket_name, NotificationConfiguration=notification_configuration)


def update(bucket_name, function_name, function_arn, prefix, suffix):
    notification_configuration = get_bucket_notification_configuration_filtered(bucket_name, function_name)
    notification_configuration['LambdaFunctionConfigurations'].append(
        {
            "Events": ["s3:ObjectCreated:*"],
            "LambdaFunctionArn": function_arn,
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {
                            "Name": "prefix",
                            "Value": prefix
                        },
                        {
                            "Name": "suffix",
                            "Value": suffix
                        }
                    ]
                }
            },
            "Id": function_name
        }
    )
    s3_client.put_bucket_notification_configuration(Bucket=bucket_name, NotificationConfiguration=notification_configuration)


def send(event, context, response_status, response_data, physical_resource_id=None, no_echo=False):
    response_url = event['ResponseURL']
    response_body = json.dumps(
        {
            'Status': response_status,
            'Reason': "See the details in CloudWatch Log Stream: " + context.log_stream_name,
            'PhysicalResourceId': physical_resource_id or context.log_stream_name,
            'StackId': event['StackId'],
            'RequestId': event['RequestId'],
            'LogicalResourceId': event['LogicalResourceId'],
            'NoEcho': no_echo,
            'Data': response_data
        }
    ).encode('utf-8')
    headers = {
        'content-type': '',
        'content-length': len(response_body)
    }
    try:
        req = urllib.request.Request(url=response_url, headers=headers, data=response_body, method='PUT')
        with urllib.request.urlopen(req) as response:
            print(response.read().decode('utf-8'))
        print("Status code: " + response.reason)
    except Exception as e:
        print("send(..) failed executing requests.put(..): " + str(e))

I just need "Events": ["s3:ObjectCreated:*"] to be passed in as a parameter.

@alex9311 - maybe i can work on a PR for the lambda i created and include it in CDK.

@michaelbrewer I'm trying to understand how your lambda would be used. Would this be outside of a cdk stack and you'd use it to add notifications separately?

@michaelbrewer I'm trying to understand how your lambda would be used. Would this be outside of a cdk stack and you'd use it to add notifications separately?

We could make a nice little wrapper for this and built it into CDK like:

your_lambda_function.add_event_source(
    aws_lambda_event_sources.S3EventSource(
        your_existing_bucket,
        events=[aws_s3.EventType.OBJECT_CREATED],
        filters=[aws_s3.NotificationKeyFilter(prefix="prefix/", suffix="suffix")]
    )
)

The CDK code i currently have (but very old)

# Add the function that sets up the S3 bucket notifications
s3_bucket_notification_function_role = aws_iam.Role(
    self,
    "SetupS3NotificationRole",
    assumed_by = aws_iam.ServicePrincipal("lambda.amazonaws.com"),
    managed_policies= [aws_iam.ManagedPolicy.from_aws_managed_policy_name("service-role/AWSLambdaBasicExecutionRole")]
)
s3_bucket_notification_function_role.add_to_policy(
    aws_iam.PolicyStatement(
        actions= ["s3:PutBucketNotification", "s3:GetBucketNotification"],
        resources= [bucket.bucket_arn]
    )
)
s3_bucket_notification_function = core.CfnResource(
    self,
    "SetupS3NotificationFunction",
    type = "AWS::Lambda::Function",
    properties = {
        "Description": 'AWS CloudFormation handler for "Custom::S3BucketNotifications" resources (@aws-cdk/aws-s3)',
        "Code": { "ZipFile": open("cdk-lambdas/s3bucketnotification.py", "r").read() },
        "Handler": "index.handler",
        "Role": s3_bucket_notification_function_role.role_arn,
        "Runtime": "python3.8",
        "Timeout": "300"
    }
)
s3_bucket_notification_function.node.add_dependency(your_lambda_function.role)

# Create a custom resource that actually sets up the bucket notifications
custom_resource = core.CfnResource(
        self,
        'CallSetupS3Notification',
        type = 'Custom::S3BucketNotifications',
        properties= {
            "ServiceToken": s3_bucket_notification_function.get_att("Arn"),
            "bucket_name": your_existing_bucket.bucket_name,
            "function_name": your_lambda_function.function_name,
            "function_arn": your_lambda_function.function_arn,
            "prefix": "prefix/",
            "suffix": "suffix"
        }
)
custom_resource.node.add_dependency(your_lambda_function.permissions_node.find_child("AllowS3Invocation"))

@eladb would it be useful to add support for this based on the implementation ☝️

A more compact lambda which could be inlined within the cloudformation template

import json
import urllib.request

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    response_data = {}
    try:
        params = event["ResourceProperties"]
        bucket = params["bucket_name"]
        identifier = params["function_name"]

        if event["RequestType"] == "Delete":
            remove(bucket, identifier)
        elif event["RequestType"] in ["Create", "Update"]:
            update(
                bucket,
                identifier,
                params["function_arn"],
                params["events"],
                params["prefix"],
                params["suffix"],
            )
            response_data = {"bucket": bucket, "function_name": identifier}

        response_status = "SUCCESS"
    except Exception as e:
        print("Failed to process:", e)
        response_status = "FAILED"
        response_data = {"Failure": "Something bad happened."}

    send(event, context, response_status, response_data)


def get_bucket_configuration(bucket, identifier):
    configuration = s3.get_bucket_notification_configuration(Bucket=bucket)
    if "ResponseMetadata" in configuration.keys():
        del configuration["ResponseMetadata"]
    if "LambdaFunctionConfigurations" in configuration.keys():
        configs = configuration["LambdaFunctionConfigurations"]
        configuration["LambdaFunctionConfigurations"] = [item for item in configs if item["Id"] != identifier]
    else:
        configuration["LambdaFunctionConfigurations"] = []
    return configuration


def remove(bucket, identifier):
    s3.put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=get_bucket_configuration(bucket, identifier),
    )


def update(bucket, identifier, function_arn, events, prefix, suffix):
    configuration = get_bucket_configuration(bucket, identifier)
    configuration["LambdaFunctionConfigurations"].append(
        {
            "Events": events.split(","),
            "LambdaFunctionArn": function_arn,
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "prefix", "Value": prefix},
                        {"Name": "suffix", "Value": suffix},
                    ]
                }
            },
            "Id": identifier,
        }
    )
    s3.put_bucket_notification_configuration(Bucket=bucket, NotificationConfiguration=configuration)


def send(event, context, _status, _data, physical_resource_id=None, no_echo=False):
    response_url = event["ResponseURL"]
    response_body = json.dumps(
        {
            "Status": _status,
            "Reason": f"See the details in CloudWatch Log Stream: {context.log_stream_name}",
            "PhysicalResourceId": physical_resource_id or context.log_stream_name,
            "StackId": event["StackId"],
            "RequestId": event["RequestId"],
            "LogicalResourceId": event["LogicalResourceId"],
            "NoEcho": no_echo,
            "Data": _data,
        }
    ).encode("utf-8")
    headers = {"content-type": "", "content-length": len(response_body)}
    try:
        req = urllib.request.Request(url=response_url, headers=headers, data=response_body, method="PUT")
        with urllib.request.urlopen(req) as response:
            print(response.read().decode("utf-8"))
        print("Status code: " + response.reason)
    except Exception as e:
        print("send(..) failed executing requests.put(..): " + str(e))

I don't see why this should not be supported; we just need to:

  1. Move private readonly notifications: BucketNotifications; from Bucket to BucketBase
  2. Update NotificationsResourceHandler handler to support existing buckets which might include existing configurations.
  3. Update the permissions for the BucketNotificationsHandler to include s3:GetBucketNotification

import boto3, json, urllib.request

s3 = boto3.client("s3")


def handler(event, context):
    try:
        props = event["ResourceProperties"]
        bucket = props["bucket_name"]
        in_config = props["notification_configuration"]

        if event["RequestType"] == "Delete":
            config = load_config(bucket, in_config)
        else:
            config = merge_config(bucket, in_config)
        s3.put_bucket_notification_configuration(Bucket=bucket, NotificationConfiguration=config)

        response_status = "SUCCESS"
    except Exception as e:
        print("Failed to process:", e)
        response_status = "FAILED"

    submit_response(event, context, response_status)


def load_config(bucket, in_config):
    config = s3.get_bucket_notification_configuration(Bucket=bucket)
    if "ResponseMetadata" in config.keys():
        del config["ResponseMetadata"]
    filter_config(config, in_config, "TopicConfigurations")
    filter_config(config, in_config, "QueueConfigurations")
    filter_config(config, in_config, "LambdaFunctionConfigurations")
    return config


def filter_config(config, in_config, config_type):
    in_config.setdefault(config_type, [])
    if config_type in config.keys():
        configs, in_ids = config[config_type], ids(in_config[config_type])
        config[config_type] = [item for item in configs if item["Id"] not in in_ids]


def ids(in_configs):
    return [item["Id"] for item in in_configs if "Id" in item.keys()]


def merge_config(bucket, in_config):
    config = load_config(bucket, in_config)
    extend_config(config, in_config, "TopicConfigurations")
    extend_config(config, in_config, "QueueConfigurations")
    extend_config(config, in_config, "LambdaFunctionConfigurations")
    return config


def extend_config(config, in_config, config_type: str):
    # setdefault stores the list back on config even when the bucket had no
    # existing configurations of this type (get() would drop the new entries)
    config.setdefault(config_type, []).extend(in_config[config_type])


def submit_response(event, context, response_status):
    response_body = json.dumps(
        {
            "Status": response_status,
            "Reason": f"See the details in CloudWatch Log Stream: {context.log_stream_name}",
            "PhysicalResourceId": event.get("PhysicalResourceId") or event["LogicalResourceId"],
            "StackId": event["StackId"],
            "RequestId": event["RequestId"],
            "LogicalResourceId": event["LogicalResourceId"],
            "NoEcho": False,
        }
    ).encode("utf-8")
    headers = {"content-type": "", "content-length": len(response_body)}
    try:
        req = urllib.request.Request(url=event["ResponseURL"], headers=headers, data=response_body, method="PUT")
        with urllib.request.urlopen(req) as response:
            print(response.read().decode("utf-8"))
        print("Status code: " + response.reason)
    except Exception as e:
        print("send(..) failed executing requests.put(..): " + str(e))
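To sanity-check the filter-then-extend merge semantics above outside AWS, the pure part can be exercised in isolation (a self-contained illustration; the dicts and Ids are made up):

```python
def merge(existing, incoming, key="LambdaFunctionConfigurations"):
    # Drop any existing entry whose Id also appears in the incoming config,
    # then append the incoming entries: update-in-place for our own Ids,
    # untouched for everyone else's notifications.
    incoming_ids = {e["Id"] for e in incoming.get(key, []) if "Id" in e}
    merged = [e for e in existing.get(key, []) if e.get("Id") not in incoming_ids]
    merged.extend(incoming.get(key, []))
    return {key: merged}

existing = {"LambdaFunctionConfigurations": [
    {"Id": "someone-elses", "Events": ["s3:ObjectRemoved:*"]},
    {"Id": "my-stack", "Events": ["s3:ObjectCreated:Put"]},
]}
incoming = {"LambdaFunctionConfigurations": [
    {"Id": "my-stack", "Events": ["s3:ObjectCreated:*"]},
]}
result = merge(existing, incoming)
# "someone-elses" survives; "my-stack" is replaced with the new events.
```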

@michaelbrewer This can definitely be supported exactly as you suggested. If you are willing, we would definitely accept a contribution based on this implementation.

I see you already implemented this in python but just want to make sure you know our handler is written in typescript :)

@michaelbrewer This can definitely be supported exactly as you suggested. If you are willing, we would definitely accept a contribution based on this implementation.

I see you already implemented this in python but just want to make sure you know our handler is written in typescript :)

Sure, I can port these changes to the typescript implementation (it just might be more lines of code ;-) )

@michaelbrewer This can definitely be supported exactly as you suggested. If you are willing, we would definitely accept a contribution based on this implementation.
I see you already implemented this in python but just want to make sure you know our handler is written in typescript :)

Sure i can port these changes to the typescript implementation (it just might be more lines of code ;-) )

Do you have an estimation of when these changes will be ported in and we can add new event sources with IBucket types?

@michaelbrewer This can definitely be supported exactly as you suggested. If you are willing, we would definitely accept a contribution based on this implementation.

I see you already implemented this in python but just want to make sure you know our handler is written in typescript :)

Do you have an estimation of when these changes will be ported in and we can add new event sources with IBucket types?

I would hope it would work as part of IBucket. Hopefully I can get around to it on Monday.

I would hope it would work as part of IBucket. Hopefully I can get around to it on Monday.

cool! Looking forward to being able to use IBucket seamlessly after

Are there any updates for the progress of this issue?

I see I got a few upvotes so I'll post my solution here. Taken from a few different answers on the matching stackoverflow post

#!/usr/bin/env python3

from aws_cdk import (
    core,
    aws_s3,
    aws_lambda,
    aws_iam,
    custom_resources
)

region = 'us-west-2'
bucket_name_to_import = 'my_existing_bucket'

app = core.App()

my_stack = core.Stack(app, "stack")
lambda_function = aws_lambda.Function(
    my_stack,
    "lambdaFunction",
    code=aws_lambda.Code.from_inline("whatever"),
    handler="index.handler",
    runtime=aws_lambda.Runtime.NODEJS_10_X,
)


s3_bucket = aws_s3.Bucket.from_bucket_name(
    my_stack, 'imported-bucket', bucket_name_to_import)


lambda_function.add_permission(
    's3-trigger-lambda-s3-invoke-function',
    principal=aws_iam.ServicePrincipal('s3.amazonaws.com'),
    action='lambda:InvokeFunction',
    source_arn=s3_bucket.bucket_arn)

notification_resource_id = 's3-notification-resource-upload-'+s3_bucket.bucket_name
bucket_notification_config = custom_resources.AwsSdkCall(
    service="S3",
    action="putBucketNotificationConfiguration",
    parameters={
        "Bucket": s3_bucket.bucket_name,
        "NotificationConfiguration": {
            "LambdaFunctionConfigurations": [{
                "Events": ['s3:ObjectCreated:*'],
                "LambdaFunctionArn": lambda_function.function_arn,
                "Filter": {
                    "Key": {
                        "FilterRules": [{'Name': 'prefix', 'Value': 'test/'}]
                    }
                }
            }]
        }
    },
    physical_resource_id=custom_resources.PhysicalResourceId.of(
        notification_resource_id),
    region=region
)
custom_s3_resource = custom_resources.AwsCustomResource(
    my_stack,
    's3-incoming-documents-notification-resource',
    policy=custom_resources.AwsCustomResourcePolicy.from_statements([
        aws_iam.PolicyStatement(
            effect=aws_iam.Effect.ALLOW,
            resources=['*'],
            actions=['s3:PutBucketNotification']
        )
    ]),
    on_create=bucket_notification_config,
    on_update=bucket_notification_config,
    on_delete=custom_resources.AwsSdkCall(
        service="S3",
        action="putBucketNotificationConfiguration",  # the SDK method name, without a "S3:" prefix
        physical_resource_id=custom_resources.PhysicalResourceId.of(
            notification_resource_id),
        parameters={
            "Bucket": s3_bucket.bucket_name,
            "NotificationConfiguration": {},
        }
    ))

custom_s3_resource.node.add_dependency(lambda_function)
custom_s3_resource.node.add_dependency(s3_bucket)

app.synth()

The only issue I'm running into with this code is that `cdk destroy` fails to delete the event on the imported bucket.

Thank you @alex9311 for your workaround. I have tried it, but I get the following error:

4/6 | 10:29:30 AM | CREATE_FAILED | Custom::AWS | s3-incoming-documents-notification-resource/Resource/Default (s3incomingdocumentsnotificationresource167874AA) Failed to create resource. awsService[call.action] is not a function
new CustomResource (/tmp/jsii-kernel-IIijKo/node_modules/@aws-cdk/core/lib/custom-resource.js:27:25)
_ new AwsCustomResource (/tmp/jsii-kernel-IIijKo/node_modules/@aws-cdk/custom-resources/lib/aws-custom-resource/aws-custom-resource.js:164:31)
_ /usr/lib/python3.8/site-packages/jsii/_embedded/jsii/jsii-runtime.js:7934:49
.....
4/6 | 10:29:31 AM | UPDATE_ROLLBACK_IN_P | AWS::CloudFormation::Stack | cdk The following resource(s) failed to create: [s3incomingdocumentsnotificationresource167874AA].
Did you face this problem? Do you know how to fix it, or have any clue how to? Thank you!
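A possible cause of that `awsService[call.action] is not a function` error: the handler behind `AwsCustomResource` looks the `action` string up as a method name directly on the SDK client, so a service-prefixed value like `S3:putBucketNotificationConfiguration` finds nothing and the lookup returns something that isn't callable. A toy sketch of that lookup (all names here are illustrative stand-ins, not the real handler code):

```python
class FakeS3Client:
    """Stand-in for an SDK client: actions are looked up as method names."""

    def put_bucket_notification_configuration(self, **kwargs):
        return {"status": "ok"}


def invoke_sdk_call(client, action):
    # Mimics the handler's awsService[call.action] lookup: the action must
    # be a bare method name, not "Service:methodName".
    method = getattr(client, action, None)
    if not callable(method):
        raise TypeError(f"{action} is not a function")
    return method()


client = FakeS3Client()
# A bare method name resolves fine:
invoke_sdk_call(client, "put_bucket_notification_configuration")
# A service-prefixed name fails the lookup, like the reported error:
try:
    invoke_sdk_call(client, "S3:put_bucket_notification_configuration")
except TypeError as err:
    print(err)
```

So it's worth double-checking every `action` value in the stack for a stray service prefix.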

Are there any updates for the progress of this issue?

I will start working on this tomorrow and put up a Draft PR as soon as possible for people to review the UX of the changes.

Also waiting on this; just started working on a workaround, happy to see it's in development.

I am also waiting for it. I prefer not to use a workaround. Thanks in advance!

Also waiting on this; I'd be interested to hear of any progress. Thanks.

Curious if this is being worked on, we are designing an interim solution based on this functionality being available in the near future. Would be great if there was some confirmation that this will be available before doing so?

Been working on this on and off. Our current solution involved using a Python-based Lambda as the custom resource, so I have to port this back to the JavaScript Lambda that CDK is currently using. Hopefully I will have a PR up soon for people to at least review and see if it gets merged.

Thanks @michaelbrewer. Trying to understand: so you created a Python Lambda that adds notifications onto S3 buckets and created a nice wrapper for it, but it's only available for the Python CDK, is that right?

We have a TypeScript CDK project with Python Lambdas. Wondering: will these changes only allow us to create a notification from S3 to Lambda, or would it work to create S3 to SQS on an imported bucket?
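For what it's worth, since the workaround above drives `putBucketNotificationConfiguration` directly, SQS targets should be reachable the same way: the API accepts `QueueConfigurations` alongside `LambdaFunctionConfigurations`. A hedged sketch of building the parameters you would hand to the `AwsSdkCall` (the bucket name, queue ARN, and prefix are placeholders):

```python
def build_queue_notification_params(bucket_name, queue_arn, prefix=None):
    """Build putBucketNotificationConfiguration parameters that route
    object-created events to an SQS queue instead of a Lambda function."""
    config = {
        "Events": ["s3:ObjectCreated:*"],
        "QueueArn": queue_arn,
    }
    if prefix is not None:
        config["Filter"] = {
            "Key": {"FilterRules": [{"Name": "prefix", "Value": prefix}]}
        }
    return {
        "Bucket": bucket_name,
        "NotificationConfiguration": {"QueueConfigurations": [config]},
    }


params = build_queue_notification_params(
    "my_existing_bucket",
    "arn:aws:sqs:us-west-2:123456789012:my-queue",  # placeholder ARN
    prefix="test/",
)
```

Note that, analogous to the `add_permission` call for Lambda, the queue would also need a queue policy allowing `s3.amazonaws.com` to send messages to it, or the notification configuration call will be rejected.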

Any updates on this? :(

Hopefully soon, as work / life balances out.

Some initial code changes for the custom resource lambda is in PR: https://github.com/aws/aws-cdk/pull/11773

I will need to update the BucketBase to include the addEventNotification
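One subtlety for supporting unowned buckets: the bucket may already carry notifications configured elsewhere, and `putBucketNotificationConfiguration` replaces the whole configuration, so the handler presumably has to merge new entries into the existing configuration rather than overwrite it. A rough sketch of that kind of merge, keyed on function ARN (names and dedup strategy are my assumptions, not necessarily what the PR actually does):

```python
def merge_notification_config(existing, additions):
    """Merge new LambdaFunctionConfigurations into an existing bucket
    notification configuration without clobbering unrelated entries."""
    merged = {key: list(value) for key, value in existing.items()}
    current = merged.setdefault("LambdaFunctionConfigurations", [])
    existing_arns = {c["LambdaFunctionArn"] for c in current}
    for config in additions:
        # Skip entries already present for the same function ARN.
        if config["LambdaFunctionArn"] not in existing_arns:
            current.append(config)
    return merged


existing = {
    "QueueConfigurations": [
        {"QueueArn": "arn:aws:sqs:us-west-2:111122223333:other-queue",
         "Events": ["s3:ObjectRemoved:*"]},
    ],
}
additions = [
    {"LambdaFunctionArn": "arn:aws:lambda:us-west-2:111122223333:function:fn",
     "Events": ["s3:ObjectCreated:*"]},
]
merged = merge_notification_config(existing, additions)
```

The pre-existing queue notification survives the merge, and re-applying the same additions is a no-op, which matters for custom-resource updates.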

@michaelbrewer I'm not a very strong Typescript developer, but I would love to offer some pairing for this functionality. This feels super helpful for unlocking a lot of functionality at my company and would love to contribute how I can.

Thanks for the offer. I have started on the PR over here: https://github.com/aws/aws-cdk/pull/11773

I think I have the custom resource updates in place; I just need to add more test cases. Just need to get my local environment working, or run this on Gitpod.

I just took a peek over in Gitpod, and it seems there is some kind of issue with an Id attribute being added to the TopicConfiguration, which is not a CloudFormation attribute, as seen here. I tried to trace it down beyond that, but couldn't discern much other than the failing notifications.test.js in the test suite.

Apologies if this is unhelpful or bothersome, but I thought I would do what digging I could and report out in case it helps.

Thanks for having a look. It might be that I can pass in the function ARN as the id; currently I don't need it in this version of the implementation.

I will see if I can build things on my Ubuntu machine; macOS does not seem to manage it.
