Amplify-cli: What’s the best workflow for multi-env?

Created on 11 May 2020 · 14 comments · Source: aws-amplify/amplify-cli

Which Category is your question related to?
multi-env

Amplify CLI Version
4.19

What AWS Services are you utilizing?
API, Auth, Lambda, DynamoDB, S3

I realize that this is partially an Amplify Console question, but there is overlap, so I’m asking this here first, because my confusion mainly revolves around multi-env on my local.

I’ve read Team workflows with Amplify CLI backend environments, and it’s what I’m trying to follow as a guide.

And I'm admittedly confused, as I write this, so kindly keep that in mind.

If I’m working in an existing prod env that has been created with multiple back-end resources (e.g. auth, lambda, storage, etc.), and I do an amplify env add, adding a new develop env, followed by an amplify push, it modifies several files in the existing git branch (currently master) on my local that are specific to the new develop environment (commands sketched after the list below).

These include:

  • team-provider-info.json
  • CloudFormation templates for lambdas
  • aws-exports.js
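
To be concrete, the sequence I’m describing is something like this (env name is just an example):

```console
$ git checkout master
$ amplify env add        # prompts for the new env name, e.g. develop
$ amplify push
$ git status             # shows the modified files listed above
```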

team-provider-info.json seems fine to commit and push to master, and aws-exports.js is excluded, so that’s okay too. But from what I can tell, the CloudFormation templates are updated with new values for the S3 deployment bucket.
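
For anyone following along, the exclusion comes from the CLI-generated .gitignore, which (from memory, so yours may differ) includes entries roughly like:

```console
$ cat .gitignore
amplify/\#current-cloud-backend
amplify/.config/local-*
amplify/backend/amplify-meta.json
amplify/backend/awscloudformation
aws-exports.js
```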

If I’m in the master branch (git) when I do this, and I push those changes to an Amplify Console connected repo, won’t this result in the current front-end in the Amplify Console talking to the back-end resources for the newly created amplify env?

If I’m in a develop branch when I add and push the new env via the Amplify CLI, the same file changes occur, and connecting the new branch in the Amplify Console results in a separate app talking to the correct back-end, with master left as-is. All good!

Here’s where I get confused: I now want to merge develop to master.

If I do this, the aforementioned file changes will overwrite those in master, and I fear this will cause master to talk to the wrong back-end.

If I’m incorrect about this assumption, and checking in these env-related changes is safe, please set me straight. I’m asking because I’ve recently had an incident where I deployed a front-end and it was talking to an incorrect back-end, but I’m not completely sure how exactly I worked that piece of magic.

If what I’ve described is correct, what is the workflow to ensure that the correct CloudFormation templates are committed to the correct branches, so that deployed front-ends are configured with the correct back-ends?

multienv question


All 14 comments

I have run into the exact same problem: when I run a git merge on my test branch to merge in dev changes, the S3 deployment bucket gets overridden in the cloudformation-template.json templates for my lambda functions. I am quite confused why this is happening even when I have followed the steps outlined in the Amplify docs, where I first check out the git branch and then check out the Amplify env before performing the merge.
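
The kind of diff I see after the merge looks something like this (bucket names made up, and the exact JSON path varies by CLI version):

```console
$ git diff amplify/backend/function/myFunction/myFunction-cloudformation-template.json
-          "S3Bucket": "amplify-myapp-test-20200511-deployment",
+          "S3Bucket": "amplify-myapp-dev-20200511-deployment",
```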

Hi @grigull, I'm glad I'm not alone in my confusion :)

My assumption is that my confusion is based on ignorance, so I'm hoping for some nice guidance via some good docs.

@dabit3 did a really nice write up here, which elaborates on multi-env, but I still haven't nailed down a workflow that makes sense to me.

When deploying via a connected repo in Amplify Console, the default build calls the amplifyPush.sh script found here.

For existing environments, the script should hit the else clause at line 26, if I'm not mistaken, and perform an import of ${ENV}, using ${STACKINFO} as its config (among other things) to do an amplify init.

I haven't dug into what ${STACKINFO} actually is yet, during a mismatched scenario, but if the connected branch has CloudFormation templates from another env, because they've been (git) pushed to a connected repo branch after an amplify push via the CLI (which alters CloudFormation templates), then this could be the cause of a front/back-end mismatch, but again, I'm not sure.
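
Paraphrasing what I understand the script to do (not verbatim; flags and variable names simplified, so check the linked script for the real logic):

```console
# amplifyPush.sh, roughly: brand-new env vs. existing env
if [ -z "$STACKINFO" ]; then
  # no stack info passed in: initialize a brand-new backend env
  amplify init --amplify "$AMPLIFY" --providers "$PROVIDERS" --yes
else
  # existing env: import it, using $STACKINFO as the provider config
  amplify env import --name "$ENV" --config "$STACKINFO" --yes
  amplify init --amplify "$AMPLIFY" --providers "$PROVIDERS" --yes
fi
amplify push --yes
```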

I've monkeyed with additional steps, like an amplify env checkout prod with an amplify pull after merging from develop and before pushing the master branch to the repo, but I feel like I'm overcomplicating things and most likely am not seeing the big picture correctly. One could also just stash or discard the changes before pushing to the repo in some scenarios. But again, I'm not sure exactly what the workflow should be, which is why I'm seeking some official guidance.

If there are additional steps to be done, I'm wondering how I'm gonna do them when the PRs and merges from feature => develop => master happen in GitHub, Bitbucket, or the like, and not on the local. I'm guessing the only place to really handle this is in the build script, but let's see if we can get some awesome pointers from some awesome Amplifyer(s).

Lastly, I want to mention that while the majority of what I just wrote involves Amplify Console, the workflow begins with amplify CLI, so that's why we're here.

Looks like the exact same problem; it doesn't look like a lack of documentation but rather a bug in the CLI.
https://github.com/aws-amplify/amplify-cli/issues/1194

I just went through the deploy of a new feature in a feature branch, followed by a merge to develop, which I flailed about too much while doing, so I couldn't document it.

Let me see if I can document the develop/test to master/prod effort. Hopefully it might help clarify what I'm talking about, even when I sometimes don't know what I'm talking about.

Scenario: the develop branch has new code related to adding a graphQL API. Its back-end, test, has been pushed to the cloud and works with a front-end also running in the cloud (via an Amplify Console connected git branch).

I would normally do a PR and merge develop to master in GitHub/Bitbucket, but I'll do this one manually on the local.

```console
$ git checkout master
$ amplify env checkout prod
√ Initialized provider successfully.
Initialized your environment successfully.
```

```console
$ amplify status

Current Environment: prod

| Category | Resource name | Operation | Provider plugin |
| -------- | ---------------------------- | --------- | ----------------- |
| Api | newGraphqlAPI | Create | awscloudformation |
| Auth | xxxxxxxxxxxxxxxxxx | Update | awscloudformation |
| Function | xxxxxxxxxxxxxxxxxx | Update | awscloudformation |
| Function | xxxxxxx | Update | awscloudformation |
| Function | xxxxxxxxxxxxxxx | Update | awscloudformation |
| Function | xxxxxx | Update | awscloudformation |
| Function | xxxxxxxxxxxx | Update | awscloudformation |
| Function | xxxxxxxx | Update | awscloudformation |
| Function | xxxxxxxxxxxxxxxxxx | Update | awscloudformation |
| Function | xxxxxxxxxxxxxxxxxxxxxxxxxxxx | Update | awscloudformation |
| Function | xxxxxxxxxxxxxxxxxxx | Update | awscloudformation |
| Storage | xxxxxxxxxxxxxxxxxxxx | No Change | awscloudformation |
| Storage | xxxxxxxxxxxxxxx | No Change | awscloudformation |
| Storage | xxxxxxxxxxxxxxxxxxx | No Change | awscloudformation |
| Storage | xxxxxxxx | No Change | awscloudformation |
| Api | xxxxxxxxxxxxx | No Change | awscloudformation |

```

Okay, so we're going to create the new API, which is good and expected, but all of the Updates are related to `develop`, not `prod`, at this point.  For example, the S3 deployment buckets in `function-cloudformation-template.json` are still pointing at the `test` back-end env.

If I do an `amplify pull` to update these, I'll lose the `create` on the new graphQL API resource, so I'll push first...
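
That is:

```console
$ amplify push
```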

This creates the new graphQL API resource on the `prod` back-end, but much of the `prod` config had, at that point, been changed to use the `test` back-end.

The amplify push operation results in file changes on the local, including: CloudFormation templates, `aws-exports.js`, and `schema.json` (because of the new graphQL bit).  All files now seem to point to prod.

Okay, I still have the **code changes** from `develop` that need to get to `master`.  Should I have merged `develop` to `master` before doing the `amplify env checkout prod` and/or `amplify push`?  Too late!

Let's merge:

```console
$ git merge develop
```

Now let's pull:

```console
$ amplify pull
```

Now commit all changes and git push to the master branch in the repo. This kicks off a build in Amplify Console (which fails due to this known issue).

Disconnect master branch front-end in Amplify Console and reconnect to prod back-end.

This works, yet the questions remain:

  1. Is there a better way to do this, workflow-wise? I don't expect Amplify to do everything, but it seems like this could be more straightforward or automated a bit more.
  2. How can this be done when the PR and merge happen in GitHub/Bitbucket?

PS: I do seem to lose the "Sign in with Apple" selections in the user pool each time I do this, but that's a different issue, which I'll raise separately, if it's not me doing something dumb.

Okay, here's another try (simpler), where I do the merge before the pull (commands sketched after the list)...

  1. In the feature branch, after making schema updates to graphql API (and other code changes)
  2. amplify push. There should be no changes.
  3. Check feature branch changes into repo
  4. Checkout develop branch (git)
  5. Merge feature to develop
  6. Checkout develop env (amplify)
  7. Pull develop env (amplify)
  8. This wipes out schema changes made in feature branch (mentioned in item 1 above)
  9. Manually copy schema.graphql from repo feature branch (probably a better way to do this) into develop branch
  10. amplify push
  11. Check develop changes into git repo. This kicks off build
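
A rough translation of those steps into commands (branch and resource names are made up):

```console
$ git checkout feature/new-api        # step 1: schema + code changes happen here
$ amplify push                        # step 2: should report no changes
$ git push origin feature/new-api     # step 3
$ git checkout develop                # step 4
$ git merge feature/new-api           # step 5
$ amplify env checkout develop        # step 6
$ amplify pull                        # step 7: this is what wiped my schema changes (step 8)
$ git checkout feature/new-api -- amplify/backend/api/myApi/schema.graphql   # step 9, one way to restore them
$ amplify push                        # step 10
$ git push origin develop             # step 11: kicks off the build
```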

This works, except for my slip up with wiping out the schema changes.

I feel this is getting close to a workable workflow, if I can just iron out the kinks.

@kimfucious I believe there are a few issues you're seeing here:

  1. The update to the Lambda/Function CloudFormation files when switching environments
    This is due to the dynamic S3 bucket and key in the CloudFormation template, which the CloudFormation service needs in order to detect changes and deploy the Lambda function. These values are generated dynamically on every push, which is why you see an "update" status when switching environments: the values differ between environments. This shouldn't affect any of your deployments.
    We're working on a better way to do this - but at this point this is the way we're handling uploading updates to the Lambda function via CloudFormation.

  2. Sign-in With Apple
    The Amplify CLI doesn't support the Sign-in with Apple feature, and since the CLI uses Infrastructure as Code via CloudFormation, any changes you make outside the CloudFormation files (i.e. using the AWS Console) will be lost on your next deployment with the Amplify CLI.

Are there any other specific issues that I can help you out with?

Hi @kaustavghosh06 thanks for the reply.

The root of my question is: what's the best workflow for multi-env?

As you can see from my prior posts, I'm working toward one, but I seem to keep running into situations where a git merge overwrites something pulled from amplify, or vice versa.

One specific example is where I lose graphql schema changes (see here).

I realize that a big part of my issue is me being ignorant, which is why I was hoping for some guidance for a workflow beyond what's currently documented here and here.

I am taking notes as I work through each attempt, which will hopefully get me to a smooth workflow, but I figure you folks do this every day and might have some hot tips.

Regarding Sign In With Apple, I've raised a separate issue, and will track there.

I use a different AWS account for each environment. There are still issues with bucket name changes, but as a team we know not to push those changes. The front-end uses .env files to hold configuration about the environment, such as endpoints and feature flags. This means that when creating a new environment, we copy values out of aws-exports.js into our .env files. Each dev has a .env file, and each shared environment has one.

I’m not a fan of putting dev and production in the same AWS account; these environments often have different security requirements. Poking around the console in a personal account, and sometimes dev, to troubleshoot or learn is fine, but prod needs different IAM permissions to avoid mishaps and to avoid accessing data where there is no legitimate need. It also allows practicing DR with less fear of hurting production, and account-level alerts don’t fire off when a dev is working out a problem in their personal account.

The workflow for features is to work in a personal account, manually pushing changes as development progresses, PR merge into shared environment which auto-deploys changes, then a simple promotion to test/staging/production using git hooks to deploy into each environment.
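
For example, a production dotfile might look roughly like this (variable names and values invented; ours just mirror whatever aws-exports.js holds for that env):

```console
$ cat .env.production
REACT_APP_GRAPHQL_ENDPOINT=https://xxxxxxxx.appsync-api.eu-west-1.amazonaws.com/graphql
REACT_APP_USER_POOL_ID=eu-west-1_XXXXXXXXX
REACT_APP_USER_POOL_CLIENT_ID=xxxxxxxxxxxxxxxxxxxxxxxxxx
REACT_APP_FEATURE_NEW_CHECKOUT=false
```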

  1. We're working on a better way to do this - but at this point this is the way we're handling uploading updates to the Lambda function via CloudFormation.

@kaustavghosh06 Have you guys had any progress on this? Is this a matter of ignoring the cloudformation file when doing the directory change hash computation, or something a little more nuanced?

@kevcam4891 It's a bit more nuanced than that. Basically, if there are no updates in the CloudFormation template itself for a changed Lambda function, then CloudFormation won't consider that an update; hence we use a new hash for every update to the function's src code and place it in the CloudFormation template to force an update and deployment of the Lambda. We could maybe improve this by making it a runtime input parameter per environment, and not including it in the CloudFormation file itself. cc @jhockett
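
For illustration only (parameter names invented), the idea would be to supply those values at deploy time rather than baking them into the template, along the lines of:

```console
$ aws cloudformation deploy \
    --template-file myFn-cloudformation-template.json \
    --stack-name myFn-prod \
    --parameter-overrides \
        deploymentBucketName=amplify-myapp-prod-deployment \
        s3Key=amplify-builds/myFn-<hash>-build.zip
```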

It's been a while since I opened this, and I've learned a few things since then.

What I'm doing now (for this project) is the following:

  1. Do all dev work locally in the git "dev" (i.e. feature) branch in the amplify "dev" environment.
  2. I re-use the amplify "dev" environment, because working on a new feature branch requires the re-setup of lambdas (e.g. env vars).
  3. I don't ever change amplify environments to "staging" or "prod" on my local, unless I've really buggered something and have to test something.
  4. When ready to go to the staging/prod branch, I commit and git push my changes to the "dev" branch in the repo.
  5. Then it's git checkout staging (or master), git merge dev, git push (sketched below). This will kick off a build via Amplify Console, as I've connected the repo branches there.
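
Concretely, step 5 looks something like this (branch names are ours):

```console
$ git checkout staging        # or master, for prod
$ git merge dev
$ git push origin staging     # Amplify Console picks this up and rebuilds against the staging back-end
```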

One thing I didn't understand when I opened this was that the build process that happens in Amplify Console does an amplify pull and then updates the backend (see the build spec excerpt below). I was thinking that I needed to do this in the CLI and push those changes before doing step 5 above.
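
For reference, the backend section of the build spec that does this looks roughly like the following (from memory; check your own amplify.yml):

```console
$ cat amplify.yml
backend:
  phases:
    build:
      commands:
        - amplifyPush --simple
```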

Usually this works flawlessly, but every once in a while I do something dumb and wind up with some diffs in those template files when I merge from dev to staging or prod. While easy to fix, I think @RossWilliams's idea of excluding those might be the way forward.

@kimfucious Yup, adding those files to .gitignore to avoid the headache of seeing them in git updates is a good idea if you're not actively working on the resource (example below). Closing this issue; it goes without saying that if you have any other specific issues, please feel free to open up a new issue in our repo!
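
For example (globs illustrative; adjust to your project layout):

```console
$ cat >> .gitignore <<'EOF'
# keep per-env, generated CloudFormation artifacts out of merges
amplify/backend/awscloudformation
amplify/backend/function/**/*-cloudformation-template.json
EOF
```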

@kimfucious I was struggling with this as well. I sort of have a similar setup: staging/prod/dev GitHub branches, and staging/prod/dev front-end and back-end environments.

Just to understand more of your flow, let's say you are making changes to your functions:

  1. Make changes in dev env
  2. Commit changes to dev branch
  3. Merge changes to the staging/prod branch, ignoring that you are committing CloudFormation files pointing to the dev S3 bucket.

During build, amplify deployment will update staging/prod and point to the correct S3 bucket?

Hi @Abuitime,

I've recently removed all of the CloudFormation templates from git, but I'll try to answer from memory, as I imagine it was happening.

  1. Do work in amplify feature environment (e.g. dev), on your local. At this point you'd also be working in a git branch that is separate from staging/prod.
  2. At this point, any amplify push commands will affect the back-end that you are working against on your local, so this will create a bunch of those CloudFormation templates locally.
  3. When ready to release to staging/prod, someone (you, if you're not on a big team) will perform a git merge from dev to staging/master and then push those changes to your repo.
  4. This will trigger a build in the Amplify Console, if you've connected the repo. The build should produce and use all of the correct CloudFormation templates for the environment that is connected to the repo you've set up.
  5. This should ignore any "incorrect" CloudFormation templates (i.e. those still pointing to dev) for the build.

Again this is from memory, as I can't reproduce this live right now, but perhaps @kaustavghosh06 may have the time to confirm or correct my memory here.
