Given the feedback so far on the Graphcool 1.0 beta, it seems like we should remove stages from the `graphcool.yml` file again and handle stages as implicit information (as, for example, Serverless does). (Related: #1407)
When running `gc deploy -s prod`, the CLI needs to know which cluster the `prod` stage should be deployed to. There are two cases:
- The stage already exists/was already deployed: In this case the CLI needs to query all connected clusters (both cloud and non-cloud clusters) to find out which cluster the service stage should be deployed to. Since this information rarely changes, it can be cached in the global Graphcool config `~/.graphcoolrc` under the `serviceClusterCache` namespace.
- The stage doesn't exist yet: If the service stage has never been deployed before (i.e. there is no cache entry in `serviceClusterCache`), the CLI should interactively ask for the cluster and proceed with the deployment. After the deployment, the CLI should create a new entry in the `serviceClusterCache` cache.
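To make the two cases concrete, here is a minimal sketch of the lookup in TypeScript. All helper names are made up for illustration, and js-yaml is just assumed as the parser; only the `serviceClusterCache` key comes from this proposal:

```ts
// Sketch only: helper and variable names are hypothetical, not actual
// CLI internals. Assumes js-yaml for reading/writing the global config.
import * as fs from 'fs'
import * as os from 'os'
import * as path from 'path'
import * as yaml from 'js-yaml'

interface CacheEntry {
  service: string
  stage: string
  cluster: string
}

const CONFIG_PATH = path.join(os.homedir(), '.graphcoolrc')

async function resolveCluster(service: string, stage: string): Promise<string> {
  const config = (yaml.load(fs.readFileSync(CONFIG_PATH, 'utf8')) as any) || {}
  const cache: CacheEntry[] = config.serviceClusterCache || []

  // Case 1: the stage was deployed before → resolve it from the cache
  const hit = cache.find(e => e.service === service && e.stage === stage)
  if (hit) {
    return hit.cluster
  }

  // Case 2: the stage was never deployed → ask interactively (stubbed
  // below), then persist the choice for subsequent deploys
  const cluster = await promptForCluster(Object.keys(config.clusters || {}))
  config.serviceClusterCache = [...cache, { service, stage, cluster }]
  fs.writeFileSync(CONFIG_PATH, yaml.dump(config))
  return cluster
}

// Placeholder for an interactive prompt (a real CLI would use something
// like inquirer); here we simply pick the first known cluster.
async function promptForCluster(clusters: string[]): Promise<string> {
  return clusters[0] || 'local'
}
```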
`serviceClusterCache` in `~/.graphcoolrc`:

```yml
clusters:
  local:
    host: 'http://localhost:60000'
    clusterSecret: ''
serviceClusterCache:
  - service: my-app
    stage: prod
    cluster: local
    hashId: cjbc91faq0000223t91yla6i9
```
When renaming or deleting a service/stage, the `serviceClusterCache` becomes inconsistent. There are multiple potential ways to deal with this:
The idea here is to add a `hashId` field to the `serviceClusterCache` entries which uniquely identifies a deployed service stage across all clusters. On every deploy command the CLI also sends the `hashId`, which the cluster compares with the `hashId` it has stored for the service. This can result in the following scenarios:
- The hashIds match: the deployment simply proceeds.
- The hashIds differ (e.g. after a rename): the cache entry receives the cluster's `hashId`, which is now updated under `serviceClusterCache`. Deployment proceeds.

To make usage a bit more convenient during development, the CLI assumes the `dev` stage as the default if no stage is provided via the `-s`/`--stage` deploy flag. This is a proven best practice from the Serverless framework.
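A rough sketch of that round trip follows; everything here except the `hashId` field itself (response shape, helper signatures, the `dev` default) is an assumption for illustration:

```ts
// Sketch of the hashId comparison; the response shape and helper
// signatures are assumptions, only `hashId` comes from the proposal.
interface DeployResponse {
  hashId: string // the hashId the cluster currently stores for this stage
}

declare function lookupCacheEntry(service: string, stage: string): { hashId: string }
declare function updateCacheEntry(service: string, stage: string, hashId: string): void
declare function sendDeploy(service: string, stage: string, hashId: string): Promise<DeployResponse>

const DEFAULT_STAGE = 'dev' // assumed when -s/--stage is omitted

async function deploy(service: string, stage: string = DEFAULT_STAGE) {
  const entry = lookupCacheEntry(service, stage)
  const response = await sendDeploy(service, stage, entry.hashId)

  if (response.hashId !== entry.hashId) {
    // e.g. the stage was renamed on the cluster: adopt the cluster's
    // hashId so the local cache is consistent again
    updateCacheEntry(service, stage, response.hashId)
  }
  // the deployment proceeds in both cases
}
```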
Open questions:

- `stages` is removed from the `graphcool.yml` file
- How should `graphql create` handle the `graphcool.yml` file?
- What happens when a user obtains a `graphcool.yml` file from somewhere else (e.g. from the cloud console)?
- Should `serviceClusterCache` rather live in a local (per service) cache file?
I'm trying to understand the essence here, and to me it sounds like there are two things:
So you basically want to bring back service IDs as a third identifier, because after a rename, name and stage differ and all our name/stage mapping assumptions fail. My take on this is that a rename should not be something that happens implicitly through some convoluted string magic in the CLI and some arcane matching protocol with the server; it should rather be an explicit mutation on the system API. The CLI can take care of that, which also means that it can rewrite the mapping file for that service. At this point I would strongly argue in favour of a service-local mapping file, as it is checked into version control, so the rename propagates to other devs.
Result: No need for an additional ID.
Thoughts?
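For illustration, an explicit rename along these lines might look as follows. To be clear, this is a hypothetical sketch: neither the `renameService` mutation nor the `.graphcool-stages` mapping file exist in the actual product, and it assumes Node 18+ for the global `fetch`:

```ts
// Hypothetical sketch: the `renameService` mutation and the
// `.graphcool-stages` mapping file are made up for illustration.
import * as fs from 'fs'
import * as yaml from 'js-yaml'

const RENAME_MUTATION = `
  mutation RenameService($from: String!, $to: String!) {
    renameService(from: $from, to: $to) { name }
  }
`

async function renameService(systemApiUrl: string, from: string, to: string) {
  // 1. the rename is an explicit mutation against the system API
  await fetch(systemApiUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: RENAME_MUTATION, variables: { from, to } }),
  })

  // 2. the CLI rewrites the service-local mapping file; since that file
  //    is checked into version control, the rename propagates to other devs
  const mapping = yaml.load(fs.readFileSync('.graphcool-stages', 'utf8')) as any
  mapping.service = to
  fs.writeFileSync('.graphcool-stages', yaml.dump(mapping))
}
```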
I would be in favour of not introducing caching at all at this time. The performance overhead is negligible and we can add it in the future.
@marktani: What prevents two clusters from having an associated stage with the same name? Which cluster would be used in this case?
In theory that's possible; however, the CLI should probably prevent it if possible. I think this is a pretty contrived scenario. Let's see whether this actually turns out to be a problem.
Regarding Implementation (2):

> (i.e. there is no cache entry in `serviceClusterCache`)

If the stage was added by another developer, then the stage would exist on a cluster, but not in the local cache. In this case it would be bad to ask the developer to pick a cluster interactively:

- it is extra work
- if they pick the wrong cluster, we will end up having the stage deployed to multiple clusters, which is bad
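One way to sidestep the prompt would be to scan the known clusters for the stage first, roughly like this (the `clusterHasStage` helper is hypothetical; this is just a sketch of the idea, not actual CLI code):

```ts
// Sketch: ask every known cluster whether it already hosts the stage
// before prompting. `clusterHasStage` is a hypothetical helper.
declare function clusterHasStage(
  cluster: string,
  service: string,
  stage: string,
): Promise<boolean>

async function findExistingCluster(
  clusters: string[],
  service: string,
  stage: string,
): Promise<string | null> {
  for (const cluster of clusters) {
    if (await clusterHasStage(cluster, service, stage)) {
      return cluster // the stage was already deployed here by someone else
    }
  }
  return null // genuinely new stage → only now prompt interactively
}
```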
@sorenbs that's exactly what Graphcool Cloud helps you with.
Implemented in the latest beta. We now have a `~/.graphcool` folder, which includes a `config.yml` and a `cache.yml`. What used to be `~/.graphcoolrc` is now `config.yml`; `cache.yml` contains the cache.
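For illustration, the resulting layout presumably looks something like this (the exact file contents are an assumption based on the description above):

```
~/.graphcool/
├── config.yml   # what used to be ~/.graphcoolrc (e.g. clusters, tokens)
└── cache.yml    # cached data such as the service/stage → cluster mapping
```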