Serverless: Can't subscribe to events of existing S3 bucket

Created on 16 Sep 2016 · 48 comments · Source: serverless/serverless

This is a Bug Report

Description

When specifying an already existing S3 bucket (xxx) in a function event, I get this error on deploy:

- xxx already exists.

I could not find documentation on how to subscribe to events of an existing S3 bucket in 1.0 rc1.

This worked in 0.5.

Most helpful comment

@flomotlik I feel like we need this support, but for a different reason than most of the other commenters and original requestor. They were coming at it mostly from the angle of "I don't want to / can't / whatever / use CloudFormation." I'm not coming at it from that angle.

For me, I want to be able to have one microservice (serverless service) that owns a bucket and performs operation X on the bucket, and another microservice (a separate serverless service) that performs operation Y on the bucket. So the bucket is created in one service - but events on that bucket are subscribed to by multiple services.

It's a similar concept to why I needed to create the https://github.com/silvermine/serverless-plugin-external-sns-events/ plugin - in Serverless if you subscribe to an SNS event, Serverless tries to create the SNS topic, but that's not always desired.

I think in general Serverless will need to allow for subscribing to events of "existing" infrastructure - whether that's (very unfortunately) manually created like some of the other commenters, or whether that's cross-stack subscriptions.

Speaking of cross-stack subscriptions, maybe support could at least be added for cross-stack resource event subscriptions using Fn::ImportValue (as described here)?

All 48 comments

It's not possible to do this through CloudFormation. Basically, you're not able to change existing infrastructure through CloudFormation.

On a more fundamental level, I think it's also best practice to create/change the configuration of a resource in only one place. While we will probably add the functionality to attach events to existing infrastructure, I don't think it's a good thing to do in general, as it creates very strong dependencies and expectations between different services: if you rename a piece of infrastructure, e.g. the S3 bucket, you would have to update every serverless service that references it.

I'm not sure that it will be a good experience for users.

I'm going to close this here, as this functionality is not going to be resolved soon, but I would be interested in your thoughts on how to integrate existing infrastructure with Serverless.

:-( This has been a frequent use case in my projects on 0.5. There is quite often existing infrastructure that I want to integrate into a Serverless project. Sorry to hear this will no longer be possible.

Overall, it sounds like quite a few things that 0.5 was an excellent tool for will no longer be possible in 1.0. It makes me wonder if they will be doable with plugins. I don't really want to go back to the old days of writing shell scripts, which Serverless 0.x pretty much eliminated.

There is a plugin for handling this: https://github.com/matt-filion/serverless-external-s3-event. It should do what you want, @kennu.

As @andymac4182 said, it is definitely possible. (BTW, I didn't know this one existed - good find.)

I'm basically just VERY hesitant to introduce features into Serverless that bind a stack to existing infrastructure, because it will very quickly lead to issues when other infrastructure is renamed or removed, and with how Serverless then tries to sync this.

Now, of course, to introduce Serverless into an existing infrastructure this has to be possible to some degree, so we'll have some of those features (or community plugins that do this), but I'm not sure it makes sense to put them in the core.

@andymac4182 Thanks, I will look at it.

@flomotlik I've experienced the joys of updating CloudFormation stacks and seeing them enter invalid states or accidentally deleting resources, and it's the reason why I keep advocating against using it for too complex things in the first place. But this discussion has already been had earlier. :-)

@flomotlik I get what you are saying about dependency binding, but as @kennu mentioned, I think it's unrealistic to require users to use CloudFormation in order to subscribe to s3/sns/etc. events. I work for a company that has 5 developers, of which I am one. At this point, we don't have time to learn and write CloudFormation templates.

The plugin that @andymac4182 linked is very buggy.

@bohnman Totally get that this isn't an option or a worthwhile investment for everyone, but we have to focus on where we can make the most difference. Building on CloudFormation has allowed us to move much faster than without it, and many small, medium and very large companies are using it. This allows us as a project, but also as a business, to grow into those companies, because there is a standard that we can easily follow and build upon.

If we had enough resources to do all of it I'd like to, but we have to focus, and integrating with the AWS API and managing the state of resources ourselves is simply a lot of work with very little upside for us and many (probably most) of our users, so for the most part we can't do it at the moment.

I am having the same issue: unable to subscribe to an existing S3 bucket.

I have checked this plugin, https://github.com/matt-filion/serverless-external-s3-event, and it is very buggy.

Is there an alternative solution to this?

Warm regards,
Javed Gardezi

@jgardezi thanks for replying. 👍
Could you share your serverless.yml file so that we can take a look?

Thanks in advance!

Thanks for the quick reply.

When I used this plugin https://github.com/matt-filion/serverless-external-s3-event my serverless.yml looked like this

```yml
service: uploaded

frameworkVersion: ">=1.1.0 <2.0.0"

plugins:
  - serverless-external-s3-event

provider:
  name: aws
  runtime: nodejs4.3

  stage: dev
  region: ap-southeast-2

  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:ListBucket"
        - "s3:PutObject"
      Resource: { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "ServerlessDeploymentBucket" } ] ] }
    - Effect: "Allow"
      Action:
        - "s3:PutBucketNotification"
      Resource:
        Fn::Join:
          - ""
          - - "arn:aws:s3:::mybucket"

functions:
  pgsave:
    handler: postgresql.pgsave
    timeout: 60
    events:
    - existingS3:
        bucket: mybucket
        bucketEvents:
          - s3:ObjectCreated:*
```

Without the above plugin my serverless.yml looked like below.

```yml
service: uploaded

frameworkVersion: ">=1.1.0 <2.0.0"

provider:
  name: aws
  runtime: nodejs4.3

  stage: dev
  region: ap-southeast-2

functions:
  pgsave:
    handler: postgresql.pgsave
    events:
    - s3:
        bucket: mybucket
        event: s3:ObjectCreated:*

```

The error I was getting without the plugin:

```
.........Serverless: Deployment failed!

  Serverless Error ---------------------------------------

     An error occurred while provisioning your stack: S3BucketDevcollectory
     - mybucket already exists.

  Get Support --------------------------------------------
     Docs:          docs.serverless.com
     Bugs:          github.com/serverless/serverless/issues

  Your Environment Information -----------------------------
     OS:                 darwin
     Node Version:       4.3.2
     Serverless Version: 1.2.1
```

Is there something I am missing?

Thanks for that @jgardezi 👍
I just read through the thread and noticed that it's currently not possible through CloudFormation, unfortunately.

Maybe the plugin author / plugin users can chime in here and help?!

No worries @pmuens

I have already opened the issue https://github.com/matt-filion/serverless-external-s3-event/issues/2 on the plugin project page.

Awesome! Thanks for doing that @jgardezi 👍

This makes it easier for others to resolve this issue if they find it here!


We are also moving more in the direction of multiple CloudFormation stacks, so that everything belongs to a stack, but not all stacks are managed by Serverless (we don't want to delete data stacks when upgrading a service to 1.0 or 2.0). So I think that for new services, cross-stack event subscriptions are the thing.

In addition to that, there are of course many cases where buckets already exist outside CloudFormation (e.g. existing Kinesis Firehose or Elastic Transcoder setups), and it will be necessary to somehow subscribe to those events.

@kennu Thx for the feedback. FYI - working on cross-service referencing is a big priority of ours right now. If you have some thoughts on how you would like this to work, please share them in our issues here or in Slack.

I got the same issue here and found this article: https://aws.amazon.com/blogs/compute/fanout-s3-event-notifications-to-multiple-endpoints/
What AWS suggests is to send a single SNS notification on ObjectCreated and have subscribers (our Lambdas) on the topic. Is there a way to create an S3 resource that automatically sends a message to an SNS topic? I can't find any info about that.
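For what it's worth, this is doable in plain CloudFormation: point the bucket's NotificationConfiguration at an SNS topic and attach a topic policy that allows S3 to publish. A minimal sketch of such a resources section follows; the names (UploadsBucket, UploadsTopic) are illustrative, not from this thread.

```yml
# Sketch only - resource names are made up for illustration.
resources:
  Resources:
    UploadsTopic:
      Type: AWS::SNS::Topic
    UploadsTopicPolicy:
      Type: AWS::SNS::TopicPolicy
      Properties:
        Topics:
          - Ref: UploadsTopic
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal:
                Service: s3.amazonaws.com
              Action: sns:Publish
              Resource:
                Ref: UploadsTopic
    UploadsBucket:
      Type: AWS::S3::Bucket
      # The policy must exist before S3 validates the notification target.
      DependsOn: UploadsTopicPolicy
      Properties:
        NotificationConfiguration:
          TopicConfigurations:
            - Event: "s3:ObjectCreated:*"
              Topic:
                Ref: UploadsTopic
```

Each Lambda then subscribes to the topic (e.g. via an sns event) instead of to the bucket directly, which is the fan-out pattern the AWS article describes.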

For others who are visiting this link, the plug-in seems to have been patched in the last few days, and I can confirm that it works for me, as of 29-Jan-2017

https://github.com/matt-filion/serverless-external-s3-event

Follow the steps as mentioned on the GitHub page, but note that you can only do step 2 after deployment of the function (unless your function already exists).

So the steps are:

  1. `sls deploy`

  2. `aws lambda add-permission --function-name FUNCTION_NAME --region us-west-2 --statement-id ANY_ID --action "lambda:InvokeFunction" --principal s3.amazonaws.com --source-arn arn:aws:s3:::BUCKET_NAME --source-account YOUR_AWS_ACCOUNT_NUM`

  3. `sls s3deploy`

@velulev why not use SNS?

Hi @XBeg9, SNS is a better option if the project uses that service too, but that may not be the case for everyone - it isn't for me. Given those limitations, I have shared my steps so that they may help someone in a similar situation :)

@velulev FWIW, the add-permission call can be replaced by the following configuration in serverless.yml

```yml
resources:
  Resources:
    LambdaInvokePermission:
      Type: AWS::Lambda::Permission
      Properties:
        Action: lambda:InvokeFunction
        FunctionName: service-stage-myLambdaFunctionName
        Principal: s3.amazonaws.com
        SourceArn: arn:aws:s3:::my-bucket
```

Also coming here with a similar use case to @jthomerson's. One service uploads to an existing bucket, and I want Serverless to be triggered on put events to that bucket. @jthomerson, how did you end up solving it?

@ac360 @pmuens Is this possible now? If not, what is your recommendation for such a use case, and is this feature on your radar?

Thank you!

> @ac360 @pmuens Is this possible now? If not, what is your recommendation for such a use case, and is this feature on your radar?

Hey @oyeanuj, thanks for your comment 👍!

Just looked into the code, and it looks like the s3 event doesn't support ARN strings just yet (s3 is the oldest event source).

However, we have https://github.com/serverless/serverless/issues/3212 in the making, where we're planning to add support for ARN detection to all event sources. This way you could reference S3 buckets by ARN with the help of e.g. the Serverless Variables system, to make it more flexible.

You could export your bucket in the other stack as an Output and use Fn::ImportValue in combination with the solution @milancermak wrote here: https://github.com/serverless/serverless/issues/2154#issuecomment-299433986

That should work as well.
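The export/import wiring could look something like the following sketch; the resource and export names (SharedBucket, shared-bucket-arn) are illustrative, not from this thread.

```yml
# In the service that owns the bucket (names are illustrative):
resources:
  Resources:
    SharedBucket:
      Type: AWS::S3::Bucket
  Outputs:
    SharedBucketArn:
      Value:
        Fn::GetAtt: [SharedBucket, Arn]
      Export:
        Name: shared-bucket-arn

# In the consuming service (a separate serverless.yml), import the
# exported value, e.g. as the SourceArn of the Lambda permission:
resources:
  Resources:
    LambdaInvokePermission:
      Type: AWS::Lambda::Permission
      Properties:
        Action: lambda:InvokeFunction
        FunctionName: service-stage-myLambdaFunctionName
        Principal: s3.amazonaws.com
        SourceArn:
          Fn::ImportValue: shared-bucket-arn
```

Note that CloudFormation will block deletion of the exporting stack while another stack imports the value, which is a useful safety property here.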

@pmuens Great, looking forward to the feature! It'd be super helpful to have an example for this use case, if possible. Thank you!

Great to hear that @oyeanuj 👍

Here's a comment / thread which could be helpful: https://github.com/serverless/serverless/issues/3257#issuecomment-310290685.

You could use something like this, but leave out the S3 bucket resource and only use the S3 bucket permission resource in your resources section.

Let us know if you need anything else!

What is the "sls s3deploy" equivalent in yarn?
I tried running _yarn run s3deploy_ but it throws an unknown command error.

@nbk11kk Yarn runs commands listed in the scripts section of your package.json. You can add Serverless commands to your scripts. For example, you may have

```json
"scripts": {
  "s3deploy": "sls s3deploy"
}
```

which you could then run with `yarn s3deploy`.

If you haven't installed Serverless globally and wish to call the command using npm's bin directory:

```
node node_modules/.bin/serverless s3deploy
```

My scripts section is below:

```json
"scripts": {
  "deploy": "serverless deploy --stage dev --verbose",
  "predeploy": "yarn run build",
  "build": "cd Build_Working && build.bat ETL",
  "serverless": "serverless",
  "invoke-etl": "serverless invoke -f ETL -d \"{}\" --log",
  "invoke": "yarn run invoke-etl",
  "deploy_function": "serverless deploy function --function ETL",
  "predeploy_function": "yarn run build",
  "s3deploy": "sls s3deploy",
  "postdeploy": "yarn run s3deploy",
  "remove": "serverless remove --stage dev --verbose"
}
```

The error:

```
$ sls s3deploy
Serverless Error ---------------------------------------
Serverless command "s3deploy" not found. Did you mean "deploy"?
```

@benswinburne: Am I doing anything wrong?

You've installed this plugin with yarn and added serverless-external-s3-event to the plugins array in your serverless.yml file?

Yes, I have installed it and added it to my .yml file:

```yml
plugins:
  serverless-plugin-existing-s3
functions:
  ETL:
    handler: TP_S3_to_Redshift.lambda_handler
    package:
      artifact: ETL/artifacts/ETL.zip
    events:
      - existings3:
          bucket: xyz
          event: s3:ObjectCreated:*
          rules:
            - prefix: logs/LAMBDA_INVOKE_TEST/
```

Not sure that syntax for plugins is correct. Try this:

```yml
plugins:
  - serverless-plugin-existing-s3
```

@benswinburne: Thanks for the tip. Now it ran without any issues, but the trigger didn't get registered in the AWS console when I open the Lambda function. Should I set it up manually again?

Just to add to the above, I'm using serverless-plugin-existing-s3 on the current 2.0 branch and it is working great so far.

I agree with everyone who has mentioned that serverless-external-s3-event seems to present some issues. In my particular case, I wanted to trigger an event on file upload to an S3 bucket that existed prior to any of the new Serverless development. I did everything according to the specifications of the library's repo, but I always got an error saying that there were no streams for the function, which meant that even though the Lambda function had been deployed to AWS, the function itself never executed. That is why I have moved on to simply following Serverless' infrastructure config as they intended it: letting them handle all of the infrastructure.

To anyone who doesn't need to use an existing S3 bucket, I would suggest going without the library in question. Or try it, and if something fails, configure manually in the AWS console whatever doesn't work.

Could you help me please? How can I create an event for an S3 bucket which already exists? I don't want to overwrite it.

```yml
Resources:
  ProductFeedBucketEnv:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${ProductFeedBucket}
      NotificationConfiguration:
        LambdaConfigurations:
          - Function: !Sub 'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:function_name'
            Event: "s3:ObjectCreated:*"
            Filter:
              S3Key:
                Rules:
                  - Name: suffix
                    Value: .json
  LambdaInvokePermission:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !Sub 'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:function_name'
      Action: "lambda:InvokeFunction"
      Principal: "s3.amazonaws.com"
      SourceArn: !Sub 'arn:aws:s3:::${ProductFeedBucket}'
```

Hi @michaelm88. I managed to use an existing S3 bucket; hope this helps you. I did it a while ago:

handler.js

```js
'use strict';

module.exports.trackerFun = async (event, context) => {
  console.log('event: ' + event);
  console.log('event.Records: ' + event.Records);
  console.log('event.Records[0]: ' + event.Records[0].s3);
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'This is indeed getting the event of uploading an image to the S3!',
      input: event,
    }),
  };
};
```

serverless.yml

```yml
# Happy Coding!
service: fixed-bucket-attempt

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: eu-west-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - s3:*
      # Resource is required for a valid IAM statement
      Resource: "*"

functions:
  trackerFun:
    handler: handler.trackerFun
    events:
      - s3: S3NAMEGOESHERE

resources:
  Resources:
    S3BucketS3NAMEGOESHERE:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: S3NAMEGOESHEREResources
        # add additional custom bucket configuration here
    TrackerFunLambdaPermissionS3NAMEGOESHEREResourcesS3:
      Type: "AWS::Lambda::Permission"
      Properties:
        FunctionName:
          "Fn::GetAtt":
            - TrackerFunLambdaFunction
            - Arn
        Principal: "s3.amazonaws.com"
        Action: "lambda:InvokeFunction"
        SourceAccount:
          Ref: AWS::AccountId
        SourceArn: "arn:aws:s3:::S3NAMEGOESHEREResources"
```
Best regards!

Hi @Daniela0106,

I can't see any difference between your serverless code and mine:

```yml
Resources:
  ProductFeedBucketEnv:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub '${DiscountCodesBucket}'
      NotificationConfiguration:
        LambdaConfigurations:
          -
            Function: !Sub 'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:TEST'
            Event: "s3:ObjectCreated:*"
            Filter:
              S3Key:
                Rules:
                  -
                    Name: suffix
                    Value: .csv
      # add additional custom bucket configuration here
  LambdaInvokePermission:
    Type: "AWS::Lambda::Permission"
    Properties:
      FunctionName: !Sub 'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:TEST'
      Principal: "s3.amazonaws.com"
      Action: "lambda:InvokeFunction"
      SourceAccount: !Sub '${AWS::AccountId}'
      SourceArn: !Sub 'arn:aws:s3:::${DiscountCodesBucket}'
```

```
15:12:08 UTC+0000 | UPDATE_FAILED      | AWS::S3::Bucket | ProductFeedBucketEnv | plt-mike-test already exists
15:12:07 UTC+0000 | UPDATE_IN_PROGRESS | AWS::S3::Bucket | ProductFeedBucketEnv | Requested update requires the creation of a new physical resource; hence creating one.
```

@michaelm88 Did you take a look at `SourceArn: !Sub 'arn:aws:s3:::${ProductFeedBucket}'`? The SourceArn is different.

@Daniela0106 it is the same as yours: the bucket name, the bucket ARN.

I've just stumbled upon this issue after trying for hours to add an event to an existing bucket. It should clearly be stated in the docs that this is not supported :-/

I also think this is a major limitation. My app needs the bucket for other purposes too, which are not linked to the serverless use case. It also has to be configured in a certain way, e.g. there has to be a specific CORS config applied to it, etc. Requiring the whole setup to be done in serverless creates a heavy dependency here.

I am also trying to use the existingS3 event to use an already created S3 bucket. When a file is uploaded to my S3 bucket, the event triggers Serverless, but my Serverless function fails to execute, as it says the bucket already exists (using the serverless-plugin-existing-s3 plugin). Below is my code:

```yml
service: test-serverless
provider:
  name: aws
  runtime: python3.7
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:Put*
        - s3:Get*
      Resource: "arn:aws:s3:::test_bucket/*"
functions:
  test-serverless:
    handler: handler.lambda_handler
    events:
      - existingS3:
          bucket: test_bucket
          events:
            - s3:ObjectCreated:*
          rules:
            - suffix: -testing.log
plugins:
  - serverless-python-requirements
  - serverless-plugin-existing-s3
custom:
  pythonRequirements:
    dockerizePip: non-linux
```
The function is not working as it should. Any help?

I ended up setting the trigger manually in the Lambda console. Far from perfect, because you have to remember to do that on any new setup, but it works.

> Not sure that syntax for plugins is correct. Try this
>
> ```yml
> plugins:
>   - serverless-plugin-existing-s3
> ```

Buggy plugin.

Still having to manually set up the trigger myself. Tried deleting my entire stack and recreating it, and that did not work :(

Same here, any help?

