Describe the bug
It seems like the @connection directives are broken in some fundamental way with respect to amplify push.
To Reproduce
Steps to reproduce the behavior:
Use @connection directives in the GraphQL schema.
At some point, amplify push will stop working with the "Resource is not in the state stackUpdateComplete" error.
Stack Trace
I can post a stack trace the next time this happens.
Expected behavior
When I use Amplify, I have this constant fear that my app will fall into this push state where it's completely unrecoverable. What I expect/would be nice is some way to back out, or roll back, to a point where everything works again. This has happened a couple of times now, and the only way forward, which it now looks like I will have to do yet again, is to nuke the entire API and start over.
Something like amplify api [name] sync, where it gives you a list of previous successful updates from which you can choose a rollback point.
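Purely as an illustration of the idea (this command does not exist today; the name, output, and dates below are all made up), the interaction could look something like:

```
$ amplify api myapi sync
Previous successful pushes for "myapi":
  1) 2019-03-01  added Comment type and @connection to Post
  2) 2019-02-20  initial schema
? Choose a push to roll back to:
```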
This is related to #982; the same happened to me.
Rollback would be nice. It would also be nice to get a --dry-run option like git gives, to see what changes would be made without actually modifying cloud resources.
I had something similar. After the initial push, the updates to the schema need to be small.
As soon as I added more than one @connection directive, the push failed. I believe it is due to some CloudFormation update restrictions with DynamoDB.
Creating a new stack through amplify env add does not suffer from this.
Updating this:
I re-thought my schema and managed to simplify it (considerably) since I was encountering these problems. So far, so good. No errors.
Thinking on this more, it might be that recursive database relationships are currently allowed by Amplify, but should not be allowed. Put differently, in my case at least, this error seemed to stem from an improper understanding of @connection, which led to a recursive situation.
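To make that concrete, a self-referential relationship of roughly this shape (an illustrative sketch, not the actual schema behind this issue) is the kind of recursive situation I mean:

```graphql
# Illustrative only: a type that connects back to itself through @connection.
type Employee @model {
  id: ID!
  name: String!
  manager: Employee @connection(name: "EmployeeReports")
  reports: [Employee] @connection(name: "EmployeeReports")
}
```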
Feel free to close this issue.
The same thing happened to me as well. I had to check out a new environment for it to work. Seems weird. A rollback is a much-needed feature.
Thanks for the feedback. I comment on the @connection issues under "Proposal 2" in this RFC (https://github.com/aws-amplify/amplify-cli/issues/1062) and hope this addresses the problem. I appreciate any feedback on the RFC.
So far, so good. No errors.
Scratch that.
It really just seems like this CLI is currently not suited to iterative product development, where you are changing the model a lot.
What would be a good temporary strategy if this happens in a production environment?
Checking out a new env would mean new data sources (which we don't want), and at the same time we cannot stop updating the models if there is an urgent need.
@grudra7714 Under the Amplify paradigm, once you get into production, you'd be developing new schema changes on a different environment than master (like development). Doing that kind of stuff on master will likely result in bad times.
Just to unpack this - the environments feature models itself (roughly) after git, where you'd check out a new branch and work on it before merging it back in. So, with Amplify, you'd "check out" a new environment, use it for a bit, and then merge it back in. They are not "different" data stores so much as "development" ones. As such, they are designed to be somewhat transient in nature.
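A rough sketch of that flow using the existing env commands (the environment name here is just an example):

```sh
amplify env add                   # prompts for a new environment name, e.g. "development"
amplify env checkout development  # switch the local backend to the new environment
# ...iterate on the schema, then deploy only this environment
amplify push
# once it looks good, switch back and apply the same backend changes to master
amplify env checkout master
amplify push
```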
It depends on your specific needs, so this is just my two cents, but if you don't want any environments in addition to master, you might want to take a step back and consider if Amplify is the right choice in the long-term.
@hew I'm also concerned about pushing new features to production. Even after coding and testing in a dev-something environment, at some point the new schema will be merged with the old one, and that is where I feel the problems will arise. Right now my plan is to deploy a new master with all the features, transfer the data from the old database to the new one, and finally change the clients' endpoint.
During development iterations I made several changes to the API, but 99% of them were not pushed by amplify api push. I believe the worst part is the time the CLI takes to figure out the errors or send the updates. With ~10 types, the stack takes around 20 minutes to complete. Depending on the stage at which it fails, it can take 30 minutes for one try.
I'm afraid that, due to some deadlines, there won't be another option to deliver our product, but I'm counting on the amplify-cli evolving in such a way that the push works flawlessly.
Depending on the stage at which it fails, it can take 30 minutes for one try.
Worth noting that the newest version of the CLI (1.1.7) clears the previous builds, which were all being uploaded to S3 prior to 1.1.7, slowing things down considerably past the first couple pushes. If you have really slow pushes, this could be why.
I believe there is something else causing the slowdown, @hew. A delete push takes roughly as long as a create push, even with brand-new stacks.
It would be really nice if it were easier to spin up/down entire projects. If things were a bit more ephemeral.
At work, I have a branch where I've set up our monorepo (which includes amplify) to move all of the lambda logic out of amplify/backend/function/name/src/ and into Lerna-managed /packages. These "mirrors" are then installed back into their respective folders.
Besides Lerna itself, this has one main benefit:
This has a couple downfalls.
Because Amplify's update feature is change-based, any changes to the Lerna-managed lambdas will not inherently be reflected in the Amplify ones. You need to version-bump and reinstall the packages each time you want to update Amplify (some might consider this a good thing); a rough sketch of that loop follows this list.
You are increasing overall complexity by bringing in another CLI tool.
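To illustrate the first downfall, the update loop looks roughly like this (the package and function names here are hypothetical):

```sh
cd packages/orders-logic                        # hypothetical Lerna-managed package with the shared lambda logic
npm version patch                               # bump the version so the mirror sees a change
cd ../../amplify/backend/function/ordersFn/src  # the "mirror" folder that Amplify actually deploys
npm install orders-logic@latest                 # reinstall the bumped package (from your registry, or a local file: path)
amplify push                                    # only now does Amplify pick up the function change
```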
With respect to this one:
You are increasing overall complexity by bringing in another CLI tool.
What would be nice is to have this kind of functionality without needing Lerna, where Amplify has some kind of structure like this:
amplify/
  automated/
  user-code/
Different names, probably, but basically the user-code should be able to survive an amplify project delete, which would clear out everything in automated. Then, an amplify project recover would spin up a _completely_ new project, but using the lambdas, GraphQL schema, etc. that are sitting in user-code.
I'd even go so far as to say that the automated stuff should be git-ignored.