Prisma1: Remove stages from graphcool.yml & cluster caching

Created on 18 Dec 2017 · 7 comments · Source: prisma/prisma1

Given the feedback so far on the Graphcool 1.0 beta, it seems we should remove stages from the graphcool.yml file again and treat stages as implicit information (as, for example, the Serverless Framework does). (Related: #1407)

Implementation

When running gc deploy -s prod, the CLI needs to know which cluster the prod stage should be deployed to. There are two cases:

  1. The stage already exists/was already deployed: In this case the CLI needs to query all connected clusters (both cloud & non-cloud) to determine which cluster the service stage is deployed to. Since this information rarely changes, it can be cached in the global Graphcool config ~/.graphcool under the serviceClusterCache namespace.

  2. The stage doesn't exist yet: If the service stage has never been deployed before (i.e. there is no cache entry in serviceClusterCache), the CLI should interactively ask for the cluster and proceed with the deployment. Afterwards, the CLI should create a new serviceClusterCache entry. (A lookup sketch follows the example below.)

Example for serviceClusterCache in ~/.graphcool

clusters:
  local:
    host: 'http://localhost:60000'
    clusterSecret: ''

serviceClusterCache:
- service: my-app
  stage: prod
  cluster: local
  hashId: cjbc91faq0000223t91yla6i9
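
To make the two cases above concrete, here is a minimal TypeScript sketch of how the CLI could resolve the target cluster from this cache. The types and the promptForCluster helper are assumptions for illustration, not actual CLI internals:

// Hypothetical sketch of the cluster-resolution step in `gc deploy`.
interface ServiceClusterEntry {
  service: string
  stage: string
  cluster: string
  hashId: string
}

interface GlobalConfig {
  clusters: { [name: string]: { host: string; clusterSecret: string } }
  serviceClusterCache: ServiceClusterEntry[]
}

// Assumed helper that interactively asks the developer to pick a cluster.
declare function promptForCluster(clusterNames: string[]): Promise<string>

async function resolveCluster(
  config: GlobalConfig,
  service: string,
  stage: string
): Promise<string> {
  // Case 1: the service stage was deployed before and is cached.
  const cached = config.serviceClusterCache.find(
    e => e.service === service && e.stage === stage
  )
  if (cached) {
    return cached.cluster
  }

  // Case 2: no cache entry yet – ask interactively, then cache the choice.
  const cluster = await promptForCluster(Object.keys(config.clusters))
  config.serviceClusterCache.push({ service, stage, cluster, hashId: '' })
  return cluster
}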

Cache invalidation & hash

When renaming or deleting a service/stage, the serviceClusterCache becomes inconsistent. There are multiple potential ways to deal with this:

  1. Don't cache at all → undesired
  2. Transfer cache invalidation responsibility to the developer → undesired
  3. Manage caching in an automatic and resilient way → desired

The idea here is to add a hashId field to the serviceClusterCache entries which uniquely identifies a deployed service stage across all clusters. On every deploy command the CLI also sends the hashId, which the cluster compares with the hashId it has stored for the service. This can result in the following scenarios (a sketch follows the list):

  1. Service/stage exists on cluster and hash ids match: Proceed with deployment
  2. Service/stage exists on cluster but hash ids don't match: Cluster returns the new hashId, which is then updated in serviceClusterCache. Deployment proceeds.
  3. Service/stage doesn't exist on cluster: Cluster returns specific error and CLI prompts developer to choose a cluster to deploy the service to.
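
A hedged TypeScript sketch of this reconciliation; the HashCheck response shape and the handleHashCheck helper are hypothetical illustrations, not the actual cluster protocol:

interface CacheEntry {
  service: string
  stage: string
  cluster: string
  hashId: string
}

// Assumed response of the cluster's hash comparison.
type HashCheck =
  | { status: 'match' }                        // scenario 1
  | { status: 'mismatch'; newHashId: string }  // scenario 2
  | { status: 'notFound' }                     // scenario 3

function handleHashCheck(
  entry: CacheEntry,
  check: HashCheck
): 'deploy' | 'chooseCluster' {
  switch (check.status) {
    case 'match':
      // 1. Hash ids match: proceed with deployment.
      return 'deploy'
    case 'mismatch':
      // 2. Hash ids differ: adopt the cluster's hashId, then deploy.
      entry.hashId = check.newHashId
      return 'deploy'
    case 'notFound':
      // 3. Unknown service/stage: prompt for a cluster to deploy to.
      return 'chooseCluster'
  }
}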

Default stage

To make development a bit more convenient, the CLI assumes the dev stage as the default if no stage is provided via the -s/--stage deploy flag. This follows a practice proven by the Serverless Framework.
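
A minimal sketch of that fallback, assuming a hypothetical flags object for the parsed CLI flags:

declare const flags: { stage?: string }

// Fall back to the `dev` stage when no -s/--stage flag was passed.
const stage: string = flags.stage || 'dev'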

Reasons why stages are being removed from graphcool.yml

  • Limits CI scenarios where you'd want to create "throw-away" stages without having to modify the graphcool.yml file
  • Creates friction in the getting-started experience with boilerplates and graphql create
  • Doesn't allow features such as cloning & restoring services (e.g. from the cloud console) without writing the graphcool.yml file

Considerations

  • [ ] Should serviceClusterCache rather live in a local (per service) cache file?
Labels: area/cli, area/deploy

All 7 comments

  • What prevents two clusters having an associated stage with the same name? Which cluster is being used in this case?

Regarding Implementation (2):

(i.e. there is no cache entry in serviceClusterCache)

If the stage was added by another developer, then the stage would exist on a cluster, but not in the local cache.

In this case it would be bad to ask the developer to pick a cluster interactively:

  1. it is extra work
  2. if they pick the wrong cluster we will end up having the stage deployed to multiple clusters, which is bad.

I'm trying to understand the essence here, and to me it sounds like there are two things:

  • It should be possible to rename the name and stage of a service.
  • The CLI needs to know where to deploy to.

So you basically want to bring back service IDs as a third identifier, because after a rename, name and stage differ and all our name/stage mapping assumptions fail. My take on this is that a rename should not be something that happens implicitly through some convoluted string magic in the CLI and some arcane matching protocol with the server; it should rather be an explicit mutation on the system API. The CLI can take care of that, which also means that it can rewrite the mapping file for that service. At this point I would strongly argue for a service-local mapping file, as it is checked into version control, so the rename propagates to other devs (a hypothetical example follows this comment).

Result: No need for an additional ID.

Thoughts?
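
For illustration, a service-local mapping file along the lines of this comment could look like the following; the file name and keys are hypothetical, not an actual format:

# hypothetical .graphcoolrc checked into the service directory
stages:
  dev: local
  prod: my-remote-cluster

Since such a file lives next to the service and is versioned, a rename rewrites it once and the change propagates to other devs via version control.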

I would be in favour of not introducing caching at all at this time. The performance overhead is negligible and we can add it in the future.

@marktani: What prevents two clusters having an associated stage with the same name? Which cluster is being used in this case?

In theory that's possible; however, this should probably be avoided by the CLI. I think this is a pretty contrived scenario. Let's see whether this actually turns out to be a problem.

Regarding Implementation (2):

(i.e. there is no cache entry in serviceClusterCache)

If the stage was added by another developer, then the stage would exist on a cluster, but not in the local cache.

In this case it would be bad to ask the developer to pick a cluster interactively:

  1. it is extra work
  2. if they pick the wrong cluster we will end up having the stage deployed to multiple clusters, which is bad.

@sorenbs that's exactly what Graphcool Cloud helps you with.

Implemented in the latest beta. We now have a ~/.graphcool folder, which includes a config.yml and a cache.yml.
What has been ~/.graphcoolrc until now is now config.yml, and cache.yml contains the cache.
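
For illustration, the split could look like the following, reusing the earlier example (the exact contents are assumed, not verified against the beta):

# ~/.graphcool/config.yml
clusters:
  local:
    host: 'http://localhost:60000'
    clusterSecret: ''

# ~/.graphcool/cache.yml
serviceClusterCache:
- service: my-app
  stage: prod
  cluster: local
  hashId: cjbc91faq0000223t91yla6i9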
