Current behavior
`prisma deploy` to a local server times out.
Reproduction
I'm trying to set up a local deployment for the first time.
1. `prisma init`
2. Create new database -> MySQL
3. `docker-compose up -d`
4. `prisma deploy`
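For context, the prisma.yml generated by this flow should look roughly like the following. This is a sketch reconstructed from the debug output further down (service and stage both resolve to `default`, and the datamodel sent in the deploy mutation is the generated `type User`), not a copy of the actual file:

```yaml
# prisma.yml (sketch, assuming the 1.8-era defaults of `prisma init`)
endpoint: http://localhost:4466
datamodel: datamodel.graphql
```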
The deploy now hangs at:
$ prisma deploy
(node:41992) ExperimentalWarning: The fs.promises API is experimental
Creating stage default for service default ✔
Deploying service `default` to stage `default` to server `local`... ⣾
until it eventually times out with
❯ prisma deploy
(node:41992) ExperimentalWarning: The fs.promises API is experimental
Creating stage default for service default ✔
Deploying service `default` to stage `default` to server `local` !
ERROR: Whoops. Looks like an internal server error. Search your server logs for request ID: local:management:cjh1zrm1k00040799zacv109d
{
  "data": {
    "deploy": null
  },
  "errors": [
    {
      "message": "Whoops. Looks like an internal server error. Search your server logs for request ID: local:management:cjh1zrm1k00040799zacv109d",
      "path": [
        "deploy"
      ],
      "locations": [
        {
          "line": 2,
          "column": 9
        }
      ],
      "requestId": "local:management:cjh1zrm1k00040799zacv109d"
    }
  ],
  "status": 200
}
Get in touch if you need help: https://www.graph.cool/forum
To get more detailed output, run $ export DEBUG="*"
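Enabling debug output before retrying, as the CLI suggests (a minimal sketch, assuming a POSIX shell):

```sh
# turn on verbose CLI logging, then retry the deploy
export DEBUG="*"
prisma deploy
```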
DEBUG version
❯ prisma deploy
config CWD /Users/nick/Desktop/foo +0ms
config HOME /Users/nick +2ms
config definitionDir /Users/nick/Desktop/foo +1ms
config definitionPath /Users/nick/Desktop/foo/prisma.yml +0ms
cli { isGlobal: true } +0ms
StatusChecker setting status checker +0ms
cli command id deploy +4ms
(node:42672) ExperimentalWarning: The fs.promises API is experimental
cli:plugincache Got plugin from cache +0ms
cli:plugincache /Users/nick/Library/Caches/prisma/plugins.json +0ms
cli:plugincache Got plugin from cache +1ms
cli:plugincache /Users/nick/Library/Caches/prisma/plugins.json +0ms
plugins findCommand prisma-cli-core +0ms
plugin requiring command +0ms
cli-engine:plugins:manager requiring /Users/nick/.nvm/versions/node/v10.1.0/lib/node_modules/prisma/node_modules/prisma-cli-core +0ms
portfinder:defaultHosts exports._defaultHosts is: [ '0.0.0.0', '127.0.0.1', '::1', 'fe80::1', '192.168.178.28', 'fe80::ec71:e8ff:fe1a:74d5', 'fe80::7e16:7ea4:acf2:fe56', 'fe80::456c:36e7:1d4d:e7f4', '192.168.99.1' ] +0ms
cli-engine:plugins:manager required +979ms
plugin required command +982ms
Environment saving global rc +0ms
StatusChecker setting status checker +1s
prisma definition making cluster here +0ms
client Initializing cluster client +0ms
deploy checking verions +1s
deploy checking if project exists +43ms
client Sending query to cluster local +46ms
client http://localhost:4466/management +0ms
client
client query($name: String! $stage: String!) {
client project(name: $name stage: $stage) {
client name
client stage
client }
client }
client +1ms
client { name: 'default', stage: 'default' } +0ms
client { project: { name: 'default', stage: 'default' } } +83ms
Deploying service `default` to stage `default` to server `local`... client Sending query to cluster local +2ms
client http://localhost:4466/management +0ms
client mutation($name: String!, $stage: String! $types: String! $dryRun: Boolean $secrets: [String!], $subscriptions: [FunctionInput!], $force: Boolean) {
client deploy(input: {
client name: $name
client stage: $stage
client types: $types
client dryRun: $dryRun
client secrets: $secrets
client subscriptions: $subscriptions
client force: $force
client }) {
client errors {
client type
client field
client description
client }
client warnings {
client type
client field
client description
client }
client migration {
client ...MigrationFragment
client }
client }
client }
client
client fragment MigrationFragment on Migration {
client revision
client steps {
client type
client __typename
client ... on CreateEnum {
client name
client ce_values: values
client }
client ... on CreateField {
client model
client name
client cf_typeName: typeName
client cf_isRequired: isRequired
client cf_isList: isList
client cf_isUnique: unique
client cf_relation: relation
client cf_defaultValue: default
client cf_enum: enum
client }
client ... on CreateModel {
client name
client }
client ... on CreateRelation {
client name
client leftModel
client rightModel
client }
client ... on DeleteEnum {
client name
client }
client ... on DeleteField {
client model
client name
client }
client ... on DeleteModel {
client name
client }
client ... on DeleteRelation {
client name
client }
client ... on UpdateEnum {
client name
client newName
client values
client }
client ... on UpdateField {
client model
client name
client newName
client typeName
client isRequired
client isList
client isUnique: unique
client relation
client default
client enum
client }
client ... on UpdateModel {
client name
client um_newName: newName
client }
client }
client }
client
client +0ms
client { name: 'default',
client stage: 'default',
client types: 'type User {\n id: ID! @unique\n name: String!\n}\n\n',
client dryRun: undefined,
client secrets: null,
client subscriptions: [],
client force: undefined } +1ms
...and then it hangs for a while and eventually times out with
!
Error: Whoops. Looks like an internal server error. Search your server logs for request ID: local:management:cjh1zz4bd00090799odsw4owb: {"response":{"data":{"deploy":null},"errors":[{"message":"Whoops. Looks like an internal server error. Search your server logs for request ID: local:management:cjh1zz4bd00090799odsw4owb","path":["deploy"],"locations":[{"line":2,"column":9}],"requestId":"local:management:cjh1zz4bd00090799odsw4owb"}],"status":200},"request":{"query":" mutation($name: String!, $stage: String! $types: String! $dryRun: Boolean $secrets: [String!], $subscriptions: [FunctionInput!], $force: Boolean) {\n deploy(input: {\n name: $name\n stage: $stage\n types: $types\n dryRun: $dryRun\n secrets: $secrets\n subscriptions: $subscriptions\n force: $force\n }) {\n errors {\n type\n field\n description\n }\n warnings {\n type\n field\n description\n }\n migration {\n ...MigrationFragment\n }\n }\n }\n \nfragment MigrationFragment on Migration {\n revision\n steps {\n type\n __typename\n ... on CreateEnum {\n name\n ce_values: values\n }\n ... on CreateField {\n model\n name\n cf_typeName: typeName\n cf_isRequired: isRequired\n cf_isList: isList\n cf_isUnique: unique\n cf_relation: relation\n cf_defaultValue: default\n cf_enum: enum\n }\n ... on CreateModel {\n name\n }\n ... on CreateRelation {\n name\n leftModel\n rightModel\n }\n ... on DeleteEnum {\n name\n }\n ... on DeleteField {\n model\n name\n }\n ... on DeleteModel {\n name\n }\n ... on DeleteRelation {\n name\n }\n ... on UpdateEnum {\n name\n newName\n values\n }\n ... on UpdateField {\n model\n name\n newName\n typeName\n isRequired\n isList\n isUnique: unique\n relation\n default\n enum\n }\n ... on UpdateModel {\n name\n um_newName: newName\n }\n }\n}\n\n ","variables":{"name":"default","stage":"default","types":"type User {\n id: ID! @unique\n name: String!\n}\n\n","secrets":null,"subscriptions":[]}}}
at GraphQLClient.<anonymous> (/Users/nick/.nvm/versions/node/v10.1.0/lib/node_modules/prisma/node_modules/graphql-request/src/index.ts:72:13)
at step (/Users/nick/.nvm/versions/node/v10.1.0/lib/node_modules/prisma/node_modules/graphql-request/dist/src/index.js:40:23)
at Object.next (/Users/nick/.nvm/versions/node/v10.1.0/lib/node_modules/prisma/node_modules/graphql-request/dist/src/index.js:21:53)
at fulfilled (/Users/nick/.nvm/versions/node/v10.1.0/lib/node_modules/prisma/node_modules/graphql-request/dist/src/index.js:12:58)
at process._tickCallback (internal/process/next_tick.js:68:7)
util timed out +0ms
Exiting with code: 0
Versions
❯ node -v
v10.1.0
❯ prisma -v
(node:42150) ExperimentalWarning: The fs.promises API is experimental
prisma/1.8.0 (darwin-x64) node-v10.1.0
❯ docker -v
Docker version 18.03.1-ce, build 9ee9f40
❯ docker-compose -v
docker-compose version 1.21.1, build 5a3f1a3
❯ npm -v
5.6.0
❯ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------
foo_mysql_1 docker-entrypoint.sh mysqld Up 3306/tcp
foo_prisma_1 /bin/sh -c /app/start.sh Up 0.0.0.0:4466->4466/tcp
❯ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
196e6d8826d3 prismagraphql/prisma:1.8 "/bin/sh -c /app/staβ¦" 19 minutes ago Up 19 minutes 0.0.0.0:4466->4466/tcp foo_prisma_1
7bb85066985c mysql:5.7 "docker-entrypoint.sβ¦" 19 minutes ago Up 19 minutes 3306/tcp foo_mysql_1
9f492d95b91d mysql:5.7 "docker-entrypoint.sβ¦" 36 minutes ago Up 36 minutes 3306/tcp bar_mysql_1
docker-compose logs
These logs probably contain the error messages of at least two attempts.
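(To scope the output to a single container instead of attaching to everything, the per-service form of the same command works too:)

```sh
# tail each service's logs separately
docker-compose logs prisma
docker-compose logs mysql
```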
❯ docker-compose logs
Attaching to foo_prisma_1, foo_mysql_1
prisma_1 | Exception in thread "main" java.sql.SQLTransientConnectionException: database - Connection is not available, request timed out after 5015ms.
prisma_1 | at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:548)
prisma_1 | at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:186)
prisma_1 | at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:145)
prisma_1 | at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:83)
prisma_1 | at slick.jdbc.hikaricp.HikariCPJdbcDataSource.createConnection(HikariCPJdbcDataSource.scala:18)
prisma_1 | at slick.jdbc.JdbcBackend$BaseSession.<init>(JdbcBackend.scala:439)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:47)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:38)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef.acquireSession(BasicBackend.scala:218)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef.acquireSession$(BasicBackend.scala:217)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.acquireSession(JdbcBackend.scala:38)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:239)
prisma_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
prisma_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
prisma_1 | at java.lang.Thread.run(Thread.java:748)
prisma_1 | Caused by: java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=mysql)(port=3306)(type=master) : Connection refused (Connection refused)
prisma_1 | at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:161)
prisma_1 | at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.connException(ExceptionMapper.java:79)
prisma_1 | at org.mariadb.jdbc.internal.protocol.AbstractConnectProtocol.connectWithoutProxy(AbstractConnectProtocol.java:1040)
prisma_1 | at org.mariadb.jdbc.internal.util.Utils.retrieveProxy(Utils.java:490)
prisma_1 | at org.mariadb.jdbc.MariaDbConnection.newConnection(MariaDbConnection.java:144)
prisma_1 | at org.mariadb.jdbc.Driver.connect(Driver.java:90)
prisma_1 | at slick.jdbc.DriverDataSource.getConnection(DriverDataSource.scala:101)
prisma_1 | at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:341)
prisma_1 | at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:193)
prisma_1 | at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:430)
prisma_1 | at com.zaxxer.hikari.pool.HikariPool.access$500(HikariPool.java:64)
prisma_1 | at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:570)
prisma_1 | at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:563)
prisma_1 | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
prisma_1 | ... 3 more
prisma_1 | Caused by: java.net.ConnectException: Connection refused (Connection refused)
prisma_1 | at java.net.PlainSocketImpl.socketConnect(Native Method)
prisma_1 | at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
prisma_1 | at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
prisma_1 | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
prisma_1 | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
prisma_1 | at java.net.Socket.connect(Socket.java:589)
prisma_1 | at java.net.Socket.connect(Socket.java:538)
prisma_1 | at org.mariadb.jdbc.internal.protocol.AbstractConnectProtocol.connect(AbstractConnectProtocol.java:398)
prisma_1 | at org.mariadb.jdbc.internal.protocol.AbstractConnectProtocol.connectWithoutProxy(AbstractConnectProtocol.java:1032)
prisma_1 | ... 14 more
prisma_1 | Initializing workers...
prisma_1 | Successfully started 1 workers.
prisma_1 | Server running on :4466
prisma_1 | Version is up to date.
prisma_1 | {"@timestamp":"2018-05-11T13:22:36.365+00:00","@version":1,"message":"{\"key\":\"request/new\",\"requestId\":\"local:management:cjh1zrkbn00000799l1eiff4y\"}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-4","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | [Metrics] Warning: no metrics will be recorded.
prisma_1 | {"@timestamp":"2018-05-11T13:22:37.606+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"local:management:cjh1zrkbn00000799l1eiff4y\",\"payload\":{\"request_duration\":\"1280\"}}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-5","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | {"@timestamp":"2018-05-11T13:22:37.662+00:00","@version":1,"message":"{\"key\":\"request/new\",\"requestId\":\"local:management:cjh1zrlcu00010799j5dmysm0\"}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-10","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | {"@timestamp":"2018-05-11T13:22:37.689+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"local:management:cjh1zrlcu00010799j5dmysm0\",\"payload\":{\"request_duration\":\"27\"}}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-4","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | {"@timestamp":"2018-05-11T13:22:37.741+00:00","@version":1,"message":"{\"key\":\"request/new\",\"requestId\":\"local:management:cjh1zrlf000020799k09g7rom\"}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-20","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | Warning: Management API authentication is disabled. To protect your management server you should provide one (not both) of the environment variables 'CLUSTER_PUBLIC_KEY' (asymmetric, deprecated soon) or 'PRISMA_MANAGEMENT_API_JWT_SECRET' (symmetric JWT).
prisma_1 | {"key":"error/handled","requestId":"local:management:cjh1zrlf000020799k09g7rom","payload":{"exception":"com.prisma.deploy.schema.InvalidProjectId: No service with name 'default' and stage 'default' found","query":"\n query($name: String! $stage: String!) {\n project(name: $name stage: $stage) {\n name\n stage\n }\n }\n ","variables":"{\"name\":\"default\",\"stage\":\"default\"}","code":"4000","stack_trace":"com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$3(SchemaBuilder.scala:139)\\n scala.Option.getOrElse(Option.scala:121)\\n com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$2(SchemaBuilder.scala:139)\\n scala.util.Success.$anonfun$map$1(Try.scala:251)\\n scala.util.Success.map(Try.scala:209)\\n scala.concurrent.Future.$anonfun$map$1(Future.scala:287)\\n scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)\\n scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)\\n scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)\\n akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)\\n akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)\\n scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)\\n scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)\\n akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)\\n akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)\\n akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:43)\\n akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)\\n akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)\\n akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)\\n akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)","message":"No service with name 'default' and stage 'default' found"}}
prisma_1 | {"@timestamp":"2018-05-11T13:22:37.896+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"local:management:cjh1zrlf000020799k09g7rom\",\"payload\":{\"request_duration\":\"156\"}}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-9","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | {"@timestamp":"2018-05-11T13:22:37.929+00:00","@version":1,"message":"{\"key\":\"request/new\",\"requestId\":\"local:management:cjh1zrlk80003079971bkhqxc\"}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-5","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | Warning: Management API authentication is disabled. To protect your management server you should provide one (not both) of the environment variables 'CLUSTER_PUBLIC_KEY' (asymmetric, deprecated soon) or 'PRISMA_MANAGEMENT_API_JWT_SECRET' (symmetric JWT).
prisma_1 | {"@timestamp":"2018-05-11T13:22:38.528+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"local:management:cjh1zrlk80003079971bkhqxc\",\"payload\":{\"request_duration\":\"599\"}}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-5","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | {"@timestamp":"2018-05-11T13:22:38.552+00:00","@version":1,"message":"{\"key\":\"request/new\",\"requestId\":\"local:management:cjh1zrm1k00040799zacv109d\"}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-5","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | Warning: Management API authentication is disabled. To protect your management server you should provide one (not both) of the environment variables 'CLUSTER_PUBLIC_KEY' (asymmetric, deprecated soon) or 'PRISMA_MANAGEMENT_API_JWT_SECRET' (symmetric JWT).
prisma_1 | akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://single-server/user/$m#2112742780]] after [300000 ms]. Sender[null] sent message of type "com.prisma.deploy.migration.migrator.DeploymentProtocol$Schedule".
prisma_1 | at akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:595)
prisma_1 | at akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)
prisma_1 | at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:140)
prisma_1 | at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:864)
prisma_1 | at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)
prisma_1 | at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)
prisma_1 | at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:862)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
prisma_1 | at java.lang.Thread.run(Thread.java:748)
prisma_1 | {"key":"error/unhandled","requestId":"local:management:cjh1zrm1k00040799zacv109d","payload":{"exception":"akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://single-server/user/$m#2112742780]] after [300000 ms]. Sender[null] sent message of type \"com.prisma.deploy.migration.migrator.DeploymentProtocol$Schedule\".","query":" mutation($name: String!, $stage: String! $types: String! $dryRun: Boolean $secrets: [String!], $subscriptions: [FunctionInput!], $force: Boolean) {\n deploy(input: {\n name: $name\n stage: $stage\n types: $types\n dryRun: $dryRun\n secrets: $secrets\n subscriptions: $subscriptions\n force: $force\n }) {\n errors {\n type\n field\n description\n }\n warnings {\n type\n field\n description\n }\n migration {\n ...MigrationFragment\n }\n }\n }\n \nfragment MigrationFragment on Migration {\n revision\n steps {\n type\n __typename\n ... on CreateEnum {\n name\n ce_values: values\n }\n ... on CreateField {\n model\n name\n cf_typeName: typeName\n cf_isRequired: isRequired\n cf_isList: isList\n cf_isUnique: unique\n cf_relation: relation\n cf_defaultValue: default\n cf_enum: enum\n }\n ... on CreateModel {\n name\n }\n ... on CreateRelation {\n name\n leftModel\n rightModel\n }\n ... on DeleteEnum {\n name\n }\n ... on DeleteField {\n model\n name\n }\n ... on DeleteModel {\n name\n }\n ... on DeleteRelation {\n name\n }\n ... on UpdateEnum {\n name\n newName\n values\n }\n ... on UpdateField {\n model\n name\n newName\n typeName\n isRequired\n isList\n isUnique: unique\n relation\n default\n enum\n }\n ... on UpdateModel {\n name\n um_newName: newName\n }\n }\n}\n\n ","variables":"{\"name\":\"default\",\"stage\":\"default\",\"types\":\"type User {\\n id: ID! @unique\\n name: String!\\n}\\n\\n\",\"secrets\":null,\"subscriptions\":[]}","code":"0","stack_trace":"akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:595)\\n akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)\\n akka.actor.Scheduler$$anon$4.run(Scheduler.scala:140)\\n scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:864)\\n scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)\\n scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)\\n scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:862)\\n akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)\\n akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)\\n akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)\\n akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)\\n java.lang.Thread.run(Thread.java:748)","message":"Ask timed out on [Actor[akka://single-server/user/$m#2112742780]] after [300000 ms]. Sender[null] sent message of type \"com.prisma.deploy.migration.migrator.DeploymentProtocol$Schedule\"."}}
prisma_1 | [Bugsnag - local / testing] Error report: com.bugsnag.Report@6c5a0951
prisma_1 | akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://single-server/user/$m#2112742780]] after [300000 ms]. Sender[null] sent message of type "com.prisma.deploy.migration.migrator.DeploymentProtocol$Schedule".
prisma_1 | at akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:595)
prisma_1 | at akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)
prisma_1 | at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:140)
prisma_1 | at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:864)
prisma_1 | at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)
prisma_1 | at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)
prisma_1 | at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:862)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
prisma_1 | at java.lang.Thread.run(Thread.java:748)
mysql_1 | Initializing database
mysql_1 | 2018-05-11T13:21:55.946958Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
mysql_1 | 2018-05-11T13:21:56.862144Z 0 [Warning] InnoDB: New log files created, LSN=45790
mysql_1 | 2018-05-11T13:21:56.975520Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
mysql_1 | 2018-05-11T13:21:57.045139Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 476c2adf-551e-11e8-8071-0242ac140002.
mysql_1 | 2018-05-11T13:21:57.052206Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
mysql_1 | 2018-05-11T13:21:57.053422Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
mysql_1 | 2018-05-11T13:21:58.129422Z 1 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:21:58.129703Z 1 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:21:58.129862Z 1 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:21:58.130078Z 1 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:21:58.130196Z 1 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:21:58.130330Z 1 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:21:58.130597Z 1 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:21:58.130841Z 1 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1 | Database initialized
mysql_1 | Initializing certificates
mysql_1 | Generating a 2048 bit RSA private key
mysql_1 | .......................................................................................+++
mysql_1 | ...............................................................................................................................................................................................................................................................................................................................+++
mysql_1 | unable to write 'random state'
mysql_1 | writing new private key to 'ca-key.pem'
mysql_1 | -----
mysql_1 | Generating a 2048 bit RSA private key
mysql_1 | ..+++
mysql_1 | ......................................................+++
mysql_1 | unable to write 'random state'
mysql_1 | writing new private key to 'server-key.pem'
mysql_1 | -----
mysql_1 | Generating a 2048 bit RSA private key
mysql_1 | ......................................+++
mysql_1 | ..+++
mysql_1 | unable to write 'random state'
mysql_1 | writing new private key to 'client-key.pem'
mysql_1 | -----
mysql_1 | Certificates initialized
mysql_1 | MySQL init process in progress...
mysql_1 | 2018-05-11T13:22:01.414574Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
mysql_1 | 2018-05-11T13:22:01.415766Z 0 [Note] mysqld (mysqld 5.7.22) starting as process 88 ...
mysql_1 | 2018-05-11T13:22:01.418331Z 0 [Note] InnoDB: PUNCH HOLE support available
mysql_1 | 2018-05-11T13:22:01.418409Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
mysql_1 | 2018-05-11T13:22:01.418439Z 0 [Note] InnoDB: Uses event mutexes
mysql_1 | 2018-05-11T13:22:01.418514Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
mysql_1 | 2018-05-11T13:22:01.418656Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
mysql_1 | 2018-05-11T13:22:01.418746Z 0 [Note] InnoDB: Using Linux native AIO
mysql_1 | 2018-05-11T13:22:01.418996Z 0 [Note] InnoDB: Number of pools: 1
mysql_1 | 2018-05-11T13:22:01.419169Z 0 [Note] InnoDB: Using CPU crc32 instructions
mysql_1 | 2018-05-11T13:22:01.420908Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
mysql_1 | 2018-05-11T13:22:01.431712Z 0 [Note] InnoDB: Completed initialization of buffer pool
mysql_1 | 2018-05-11T13:22:01.434166Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
mysql_1 | 2018-05-11T13:22:01.446221Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
mysql_1 | 2018-05-11T13:22:01.456981Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
mysql_1 | 2018-05-11T13:22:01.457276Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
mysql_1 | 2018-05-11T13:22:01.508226Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
mysql_1 | 2018-05-11T13:22:01.509476Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
prisma_1 | {"@timestamp":"2018-05-11T13:27:39.049+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"local:management:cjh1zrm1k00040799zacv109d\",\"payload\":{\"request_duration\":\"300497\"}}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-26","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | {"@timestamp":"2018-05-11T13:28:28.643+00:00","@version":1,"message":"{\"key\":\"request/new\",\"requestId\":\"local:management:cjh1zz46a0006079929blo5vc\"}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-34","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | {"@timestamp":"2018-05-11T13:28:28.671+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"local:management:cjh1zz46a0006079929blo5vc\",\"payload\":{\"request_duration\":\"28\"}}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-35","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | {"@timestamp":"2018-05-11T13:28:28.702+00:00","@version":1,"message":"{\"key\":\"request/new\",\"requestId\":\"local:management:cjh1zz47y00070799jl9n5rvp\"}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-35","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | {"@timestamp":"2018-05-11T13:28:28.720+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"local:management:cjh1zz47y00070799jl9n5rvp\",\"payload\":{\"request_duration\":\"17\"}}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-31","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | {"@timestamp":"2018-05-11T13:28:28.752+00:00","@version":1,"message":"{\"key\":\"request/new\",\"requestId\":\"local:management:cjh1zz49b00080799b8n0lzh8\"}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-31","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | Warning: Management API authentication is disabled. To protect your management server you should provide one (not both) of the environment variables 'CLUSTER_PUBLIC_KEY' (asymmetric, deprecated soon) or 'PRISMA_MANAGEMENT_API_JWT_SECRET' (symmetric JWT).
prisma_1 | {"@timestamp":"2018-05-11T13:28:28.806+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"local:management:cjh1zz49b00080799b8n0lzh8\",\"payload\":{\"request_duration\":\"55\"}}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-35","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | {"@timestamp":"2018-05-11T13:28:28.825+00:00","@version":1,"message":"{\"key\":\"request/new\",\"requestId\":\"local:management:cjh1zz4bd00090799odsw4owb\"}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-31","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
prisma_1 | Warning: Management API authentication is disabled. To protect your management server you should provide one (not both) of the environment variables 'CLUSTER_PUBLIC_KEY' (asymmetric, deprecated soon) or 'PRISMA_MANAGEMENT_API_JWT_SECRET' (symmetric JWT).
prisma_1 | akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://single-server/user/$m#2112742780]] after [300000 ms]. Sender[null] sent message of type "com.prisma.deploy.migration.migrator.DeploymentProtocol$Schedule".
prisma_1 | at akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:595)
prisma_1 | at akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)
prisma_1 | at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:140)
prisma_1 | at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:864)
prisma_1 | at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)
prisma_1 | at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)
prisma_1 | at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:862)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
prisma_1 | {"key":"error/unhandled","requestId":"local:management:cjh1zz4bd00090799odsw4owb","payload":{"exception":"akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://single-server/user/$m#2112742780]] after [300000 ms]. Sender[null] sent message of type \"com.prisma.deploy.migration.migrator.DeploymentProtocol$Schedule\".","query":" mutation($name: String!, $stage: String! $types: String! $dryRun: Boolean $secrets: [String!], $subscriptions: [FunctionInput!], $force: Boolean) {\n deploy(input: {\n name: $name\n stage: $stage\n types: $types\n dryRun: $dryRun\n secrets: $secrets\n subscriptions: $subscriptions\n force: $force\n }) {\n errors {\n type\n field\n description\n }\n warnings {\n type\n field\n description\n }\n migration {\n ...MigrationFragment\n }\n }\n }\n \nfragment MigrationFragment on Migration {\n revision\n steps {\n type\n __typename\n ... on CreateEnum {\n name\n ce_values: values\n }\n ... on CreateField {\n model\n name\n cf_typeName: typeName\n cf_isRequired: isRequired\n cf_isList: isList\n cf_isUnique: unique\n cf_relation: relation\n cf_defaultValue: default\n cf_enum: enum\n }\n ... on CreateModel {\n name\n }\n ... on CreateRelation {\n name\n leftModel\n rightModel\n }\n ... on DeleteEnum {\n name\n }\n ... on DeleteField {\n model\n name\n }\n ... on DeleteModel {\n name\n }\n ... on DeleteRelation {\n name\n }\n ... on UpdateEnum {\n name\n newName\n values\n }\n ... on UpdateField {\n model\n name\n newName\n typeName\n isRequired\n isList\n isUnique: unique\n relation\n default\n enum\n }\n ... on UpdateModel {\n name\n um_newName: newName\n }\n }\n}\n\n ","variables":"{\"name\":\"default\",\"stage\":\"default\",\"types\":\"type User {\\n id: ID! @unique\\n name: String!\\n}\\n\\n\",\"secrets\":null,\"subscriptions\":[]}","code":"0","stack_trace":"akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:595)\\n akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)\\n akka.actor.Scheduler$$anon$4.run(Scheduler.scala:140)\\n scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:864)\\n scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)\\n scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)\\n scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:862)\\n akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)\\n akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)\\n akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)\\n akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)\\n java.lang.Thread.run(Thread.java:748)","message":"Ask timed out on [Actor[akka://single-server/user/$m#2112742780]] after [300000 ms]. Sender[null] sent message of type \"com.prisma.deploy.migration.migrator.DeploymentProtocol$Schedule\"."}}
prisma_1 | [Bugsnag - local / testing] Error report: com.bugsnag.Report@6a2d4015
prisma_1 | at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
prisma_1 | at java.lang.Thread.run(Thread.java:748)
prisma_1 | akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://single-server/user/$m#2112742780]] after [300000 ms]. Sender[null] sent message of type "com.prisma.deploy.migration.migrator.DeploymentProtocol$Schedule".
prisma_1 | at akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:595)
prisma_1 | at akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)
prisma_1 | at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:140)
prisma_1 | at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:864)
prisma_1 | at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)
prisma_1 | at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)
prisma_1 | at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:862)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
prisma_1 | at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
prisma_1 | at java.lang.Thread.run(Thread.java:748)
prisma_1 | {"@timestamp":"2018-05-11T13:33:28.973+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"local:management:cjh1zz4bd00090799odsw4owb\",\"payload\":{\"request_duration\":\"300146\"}}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-40","level":"INFO","level_value":20000,"HOSTNAME":"196e6d8826d3"}
mysql_1 | 2018-05-11T13:22:01.509613Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
mysql_1 | 2018-05-11T13:22:01.510854Z 0 [Note] InnoDB: 5.7.22 started; log sequence number 2589156
mysql_1 | 2018-05-11T13:22:01.511267Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mysql_1 | 2018-05-11T13:22:01.511782Z 0 [Note] Plugin 'FEDERATED' is disabled.
mysql_1 | 2018-05-11T13:22:01.512843Z 0 [Note] InnoDB: Buffer pool(s) load completed at 180511 13:22:01
mysql_1 | 2018-05-11T13:22:01.516289Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
mysql_1 | 2018-05-11T13:22:01.516754Z 0 [Warning] CA certificate ca.pem is self signed.
mysql_1 | 2018-05-11T13:22:01.521534Z 0 [Warning] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
mysql_1 | 2018-05-11T13:22:01.523282Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:01.523564Z 0 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:01.523741Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:01.523914Z 0 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:01.524120Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:01.524274Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:01.526296Z 0 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:01.526569Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:01.537780Z 0 [Note] Event Scheduler: Loaded 0 events
mysql_1 | 2018-05-11T13:22:01.538498Z 0 [Note] mysqld: ready for connections.
mysql_1 | Version: '5.7.22' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server (GPL)
mysql_1 | Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
mysql_1 | Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
mysql_1 | Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
mysql_1 | Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
mysql_1 |
mysql_1 | 2018-05-11T13:22:05.856651Z 4 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:05.856780Z 4 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:05.856841Z 4 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:05.856914Z 4 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:05.856959Z 4 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:05.857010Z 4 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:05.857117Z 4 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:05.857167Z 4 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:05.860012Z 0 [Note] Giving 0 client threads a chance to die gracefully
mysql_1 | 2018-05-11T13:22:05.860121Z 0 [Note] Shutting down slave threads
mysql_1 | 2018-05-11T13:22:05.860176Z 0 [Note] Forcefully disconnecting 0 remaining clients
mysql_1 | 2018-05-11T13:22:05.860211Z 0 [Note] Event Scheduler: Purging the queue. 0 events
mysql_1 | 2018-05-11T13:22:05.860335Z 0 [Note] Binlog end
mysql_1 | 2018-05-11T13:22:05.861062Z 0 [Note] Shutting down plugin 'ngram'
mysql_1 | 2018-05-11T13:22:05.861159Z 0 [Note] Shutting down plugin 'partition'
mysql_1 | 2018-05-11T13:22:05.861209Z 0 [Note] Shutting down plugin 'BLACKHOLE'
mysql_1 | 2018-05-11T13:22:05.861251Z 0 [Note] Shutting down plugin 'ARCHIVE'
mysql_1 | 2018-05-11T13:22:05.861294Z 0 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
mysql_1 | 2018-05-11T13:22:05.861362Z 0 [Note] Shutting down plugin 'MRG_MYISAM'
mysql_1 | 2018-05-11T13:22:05.861402Z 0 [Note] Shutting down plugin 'MyISAM'
mysql_1 | 2018-05-11T13:22:05.861447Z 0 [Note] Shutting down plugin 'INNODB_SYS_VIRTUAL'
mysql_1 | 2018-05-11T13:22:05.861484Z 0 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
mysql_1 | 2018-05-11T13:22:05.861520Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
mysql_1 | 2018-05-11T13:22:05.861555Z 0 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
mysql_1 | 2018-05-11T13:22:05.861590Z 0 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
mysql_1 | 2018-05-11T13:22:05.861624Z 0 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
mysql_1 | 2018-05-11T13:22:05.861657Z 0 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
mysql_1 | 2018-05-11T13:22:05.861692Z 0 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
mysql_1 | 2018-05-11T13:22:05.861727Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
mysql_1 | 2018-05-11T13:22:05.861760Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
mysql_1 | 2018-05-11T13:22:05.861791Z 0 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
mysql_1 | 2018-05-11T13:22:05.861825Z 0 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
mysql_1 | 2018-05-11T13:22:05.861860Z 0 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
mysql_1 | 2018-05-11T13:22:05.861895Z 0 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
mysql_1 | 2018-05-11T13:22:05.861935Z 0 [Note] Shutting down plugin 'INNODB_FT_DELETED'
mysql_1 | 2018-05-11T13:22:05.861969Z 0 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
mysql_1 | 2018-05-11T13:22:05.862003Z 0 [Note] Shutting down plugin 'INNODB_METRICS'
mysql_1 | 2018-05-11T13:22:05.862037Z 0 [Note] Shutting down plugin 'INNODB_TEMP_TABLE_INFO'
mysql_1 | 2018-05-11T13:22:05.862073Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
mysql_1 | 2018-05-11T13:22:05.862108Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
mysql_1 | 2018-05-11T13:22:05.862141Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
mysql_1 | 2018-05-11T13:22:05.862174Z 0 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
mysql_1 | 2018-05-11T13:22:05.862205Z 0 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
mysql_1 | 2018-05-11T13:22:05.862241Z 0 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
mysql_1 | 2018-05-11T13:22:05.862278Z 0 [Note] Shutting down plugin 'INNODB_CMPMEM'
mysql_1 | 2018-05-11T13:22:05.862313Z 0 [Note] Shutting down plugin 'INNODB_CMP_RESET'
mysql_1 | 2018-05-11T13:22:05.862349Z 0 [Note] Shutting down plugin 'INNODB_CMP'
mysql_1 | 2018-05-11T13:22:05.862382Z 0 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
mysql_1 | 2018-05-11T13:22:05.862417Z 0 [Note] Shutting down plugin 'INNODB_LOCKS'
mysql_1 | 2018-05-11T13:22:05.862454Z 0 [Note] Shutting down plugin 'INNODB_TRX'
mysql_1 | 2018-05-11T13:22:05.862493Z 0 [Note] Shutting down plugin 'InnoDB'
mysql_1 | 2018-05-11T13:22:05.863636Z 0 [Note] InnoDB: FTS optimize thread exiting.
mysql_1 | 2018-05-11T13:22:05.864289Z 0 [Note] InnoDB: Starting shutdown...
mysql_1 | 2018-05-11T13:22:05.964806Z 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
mysql_1 | 2018-05-11T13:22:05.965202Z 0 [Note] InnoDB: Buffer pool(s) dump completed at 180511 13:22:05
mysql_1 | 2018-05-11T13:22:07.389653Z 0 [Note] InnoDB: Shutdown completed; log sequence number 12358473
mysql_1 | 2018-05-11T13:22:07.393096Z 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
mysql_1 | 2018-05-11T13:22:07.393370Z 0 [Note] Shutting down plugin 'MEMORY'
mysql_1 | 2018-05-11T13:22:07.393419Z 0 [Note] Shutting down plugin 'CSV'
mysql_1 | 2018-05-11T13:22:07.393520Z 0 [Note] Shutting down plugin 'sha256_password'
mysql_1 | 2018-05-11T13:22:07.393650Z 0 [Note] Shutting down plugin 'mysql_native_password'
mysql_1 | 2018-05-11T13:22:07.393856Z 0 [Note] Shutting down plugin 'binlog'
mysql_1 | 2018-05-11T13:22:07.396615Z 0 [Note] mysqld: Shutdown complete
mysql_1 |
mysql_1 |
mysql_1 | MySQL init process done. Ready for start up.
mysql_1 |
mysql_1 | 2018-05-11T13:22:07.785777Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
mysql_1 | 2018-05-11T13:22:07.789893Z 0 [Note] mysqld (mysqld 5.7.22) starting as process 1 ...
mysql_1 | 2018-05-11T13:22:07.795466Z 0 [Note] InnoDB: PUNCH HOLE support available
mysql_1 | 2018-05-11T13:22:07.795618Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
mysql_1 | 2018-05-11T13:22:07.795802Z 0 [Note] InnoDB: Uses event mutexes
mysql_1 | 2018-05-11T13:22:07.795998Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
mysql_1 | 2018-05-11T13:22:07.796069Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
mysql_1 | 2018-05-11T13:22:07.796114Z 0 [Note] InnoDB: Using Linux native AIO
mysql_1 | 2018-05-11T13:22:07.796661Z 0 [Note] InnoDB: Number of pools: 1
mysql_1 | 2018-05-11T13:22:07.797021Z 0 [Note] InnoDB: Using CPU crc32 instructions
mysql_1 | 2018-05-11T13:22:07.799128Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
mysql_1 | 2018-05-11T13:22:07.825688Z 0 [Note] InnoDB: Completed initialization of buffer pool
mysql_1 | 2018-05-11T13:22:07.828327Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
mysql_1 | 2018-05-11T13:22:07.842913Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
mysql_1 | 2018-05-11T13:22:07.861892Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
mysql_1 | 2018-05-11T13:22:07.862088Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
mysql_1 | 2018-05-11T13:22:07.972640Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
mysql_1 | 2018-05-11T13:22:07.975170Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
mysql_1 | 2018-05-11T13:22:07.975459Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
mysql_1 | 2018-05-11T13:22:07.976486Z 0 [Note] InnoDB: Waiting for purge to start
mysql_1 | 2018-05-11T13:22:08.186072Z 0 [Note] InnoDB: 5.7.22 started; log sequence number 12358473
mysql_1 | 2018-05-11T13:22:08.187019Z 0 [Note] Plugin 'FEDERATED' is disabled.
mysql_1 | 2018-05-11T13:22:08.193182Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
mysql_1 | 2018-05-11T13:22:08.195462Z 0 [Warning] CA certificate ca.pem is self signed.
mysql_1 | 2018-05-11T13:22:08.195178Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mysql_1 | 2018-05-11T13:22:08.200430Z 0 [Note] InnoDB: Buffer pool(s) load completed at 180511 13:22:08
mysql_1 | 2018-05-11T13:22:08.203965Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
mysql_1 | 2018-05-11T13:22:08.206158Z 0 [Note] IPv6 is available.
mysql_1 | 2018-05-11T13:22:08.207564Z 0 [Note] - '::' resolves to '::';
mysql_1 | 2018-05-11T13:22:08.208119Z 0 [Note] Server socket created on IP: '::'.
mysql_1 | 2018-05-11T13:22:08.218704Z 0 [Warning] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
mysql_1 | 2018-05-11T13:22:08.221027Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:08.221709Z 0 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:08.221869Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:08.222030Z 0 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:08.222206Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:08.222371Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:08.238259Z 0 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:08.238643Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql_1 | 2018-05-11T13:22:08.250725Z 0 [Note] Event Scheduler: Loaded 0 events
mysql_1 | 2018-05-11T13:22:08.254427Z 0 [Note] mysqld: ready for connections.
mysql_1 | Version: '5.7.22' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
Expected behavior?
The deployment should succeed. I'm not sure whether the error is on my side or this is an actual bug.
Hey @dferber90, what's the version of the prismagraphql/prisma Docker image you're running? Can you query the commit hash from the /management API and share it? What does your docker-compose.yml look like?
> What does your docker-compose.yml look like?
❯ cat docker-compose.yml
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.8
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        # managementApiSecret: my-secret
        databases:
          default:
            connector: mysql
            host: mysql
            port: 3306
            user: root
            password: prisma
            migrations: true
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: prisma
    volumes:
      - mysql:/var/lib/mysql
volumes:
  mysql:
> Can you query the commit hash from the /management API and share it?
Upon opening http://localhost:4466/ I immediately see an error in the results panel:
{
  "error": "Mutation fields must be an object with field names as keys or a function which returns such an object."
}
and the "Server cannot be reached" message is present, see here.
The management API is at http://localhost:4466/management
Ah, right. Sorry, I've been trying for a few hours and my brain is slowly shutting down :P
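The query I ran against http://localhost:4466/management was along these lines (reconstructed from the response fields below, so treat it as a sketch):

```graphql
{
  listProjects {
    name
    stage
  }
  serverInfo {
    version
    commit
  }
  clusterInfo {
    version
    commit
  }
}
```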
{
  "data": {
    "listProjects": [
      {
        "name": "default",
        "stage": "default"
      }
    ],
    "serverInfo": {
      "version": "1.8.2",
      "commit": "95f27d07b941a79ea31788817167d43546128d73"
    },
    "clusterInfo": {
      "version": "1.8.2",
      "commit": "95f27d07b941a79ea31788817167d43546128d73"
    }
  }
}
I'd be happy to screen-share if you're around (https://appear.in/graphql-come-on). I'll wait there for the next 10 minutes in case you want to join.
Thanks, that does look like a bug!
I followed the same steps and hit a similar error.
The SQL logs look clean.
The prisma Docker log has the following; I don't know whether this helps to diagnose the issue:
{"@timestamp":"2018-05-11T12:55:13.747+00:00","@version":1,"message":"{\"key\":\"request/new\",\"requestId\":\"local:management:cjh1yscwj00040a19ymvuj9eq\"}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-8","level":"INFO","level_value":20000,"HOSTNAME":"a856aecf858c"}
Warning: Management API authentication is disabled. To protect your management server you should provide one (not both) of the environment variables 'CLUSTER_PUBLIC_KEY' (asymmetric, deprecated soon) or 'PRISMA_MANAGEMENT_API_JWT_SECRET' (symmetric JWT).
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://single-server/user/$m#715999029]] after [300000 ms]. Sender[null] sent message of type "com.prisma.deploy.migration.migrator.DeploymentProtocol$Schedule".
at akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:595)
at akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)
at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:140)
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:864)
at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)
at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:862)
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
at java.lang.Thread.run(Thread.java:748)
{"key":"error/unhandled","requestId":"local:management:cjh1yscwj00040a19ymvuj9eq","payload":{"exception":"akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://single-server/user/$m#715999029]] after [300000 ms]. Sender[null] sent message of type \"com.prisma.deploy.migration.migrator.DeploymentProtocol$Schedule\".","query":" mutation($name: String!, $stage: String! $types: String! $dryRun: Boolean $secrets: [String!], $subscriptions: [FunctionInput!], $force: Boolean) {\n deploy(input: {\n name: $name\n stage: $stage\n types: $types\n dryRun: $dryRun\n secrets: $secrets\n subscriptions: $subscriptions\n force: $force\n }) {\n errors {\n type\n field\n description\n }\n warnings {\n type\n field\n description\n }\n migration {\n ...MigrationFragment\n }\n }\n }\n \nfragment MigrationFragment on Migration {\n revision\n steps {\n type\n __typename\n ... on CreateEnum {\n name\n ce_values: values\n }\n ... on CreateField {\n model\n name\n cf_typeName: typeName\n cf_isRequired: isRequired\n cf_isList: isList\n cf_isUnique: unique\n cf_relation: relation\n cf_defaultValue: default\n cf_enum: enum\n }\n ... on CreateModel {\n name\n }\n ... on CreateRelation {\n name\n leftModel\n rightModel\n }\n ... on DeleteEnum {\n name\n }\n ... on DeleteField {\n model\n name\n }\n ... on DeleteModel {\n name\n }\n ... on DeleteRelation {\n name\n }\n ... on UpdateEnum {\n name\n newName\n values\n }\n ... on UpdateField {\n model\n name\n newName\n typeName\n isRequired\n isList\n isUnique: unique\n relation\n default\n enum\n }\n ... on UpdateModel {\n name\n um_newName: newName\n }\n }\n}\n\n ","variables":"{\"name\":\"default\",\"stage\":\"default\",\"types\":\"type User {\n id: ID! @unique\n name: String!\n}\n\n\",\"secrets\":null,\"subscriptions\":[]}","code":"0","stack_trace":"akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:595)\n akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)\n akka.actor.Scheduler$$anon$4.run(Scheduler.scala:140)\n scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:864)\n scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)\n scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)\n scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:862)\n akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)\n akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)\n akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)\n akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)\n java.lang.Thread.run(Thread.java:748)","message":"Ask timed out on [Actor[akka://single-server/user/$m#715999029]] after [300000 ms]. Sender[null] sent message of type \"com.prisma.deploy.migration.migrator.DeploymentProtocol$Schedule\"."}}
Any ideas on how to work around this?
This works if you revert the Prisma docker image to 1.7 and the npm version to 1.7.4.
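A sketch of that revert, assuming a globally installed CLI and a compose file like the one further down in this thread:

# pin the server image in docker-compose.yml:
#   image: prismagraphql/prisma:1.7
npm install -g prisma@1.7.4    # CLI version from the comment above
docker-compose up -d           # recreate the container on the old image
prisma deploy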
Thanks for sharing that @vantreeseba.
We will look into this problem soon!
Interestingly enough, it looks like when it queries the management API it's not fetching the correct project, but the management console gives this as the project info:
{
  "data": {
    "listProjects": [
      {
        "name": "default",
        "stage": "default"
      }
    ],
    "serverInfo": {
      "version": "1.8.2",
      "commit": "95f27d07b941a79ea31788817167d43546128d73"
    },
    "project": {
      "name": "default",
      "stage": "default"
    },
    "clusterInfo": {
      "version": "1.8.2",
      "commit": "95f27d07b941a79ea31788817167d43546128d73"
    }
  }
}
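That response can presumably be reproduced outside the console by posting the same fields to the management endpoint (http://localhost:4466/management, as shown in the DEBUG output; adjust host/port if yours differ):

curl -s http://localhost:4466/management \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ listProjects { name stage } serverInfo { version commit } clusterInfo { version commit } }"}'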
The error in the docker container is this (truncated):
{
  "key": "error/handled",
  "requestId": "local:management:cjh2a9fba00020837xp78ovue",
  "payload": {
    "exception": "com.prisma.deploy.schema.InvalidProjectId: No service with name 'default' and stage 'default' found",
    "query": "\n query($name: String! $stage: String!)
This is with:
prisma 1.8
prisma docker 1.8
Node 8.11
docker 17.12.0-ce
Ubuntu
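The handled error above comes from the project lookup, and the same query can be issued by hand; this is a sketch whose query shape is taken from the error payload and the CLI's DEBUG output, not an official diagnostic:

curl -s http://localhost:4466/management \
  -H 'Content-Type: application/json' \
  -d '{"query":"query($name: String!, $stage: String!) { project(name: $name, stage: $stage) { name stage } }","variables":{"name":"default","stage":"default"}}'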
@marktani Saw you put MySQL on this; this was happening with Postgres.
I put MySQL on this because of the initial report; thanks for your clarification! This seems to affect both connectors.
I am also able to replicate this with MySQL as the connector.
docker-compose.yml
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.8
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        # managementApiSecret: my-secret
        databases:
          default:
            connector: mysql
            host: mysql
            port: 3306
            user: root
            password: prisma
            migrations: true
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: prisma
    volumes:
      - mysql:/var/lib/mysql
volumes:
  mysql:
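If you are reproducing with this file, one way to retest the first deployment from a clean slate is to drop the containers together with the named MySQL volume (this destroys the local data, so only do it on a throwaway setup):

docker-compose down -v         # also removes the named mysql volume
docker-compose up -d
docker-compose logs -f prisma  # watch the server while prisma deploy runs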
Query on localhost:4466/management:
{
  "data": {
    "listProjects": [
      {
        "name": "default",
        "stage": "default"
      }
    ],
    "serverInfo": {
      "version": "1.8.2",
      "commit": "95f27d07b941a79ea31788817167d43546128d73"
    },
    "clusterInfo": {
      "version": "1.8.2",
      "commit": "95f27d07b941a79ea31788817167d43546128d73"
    }
  }
}
docker logs database_prisma_1
{"@timestamp":"2018-05-11T19:11:20.968+00:00","@version":1,"message":"{\"key\":\"request/new\",\"requestId\":\"local:management:cjh2c81vr00040796vj1qvqlu\"}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-21","level":"INFO","level_value":20000,"HOSTNAME":"12bd247e675b"}
Warning: Management API authentication is disabled. To protect your management server you should provide one (not both) of the environment variables 'CLUSTER_PUBLIC_KEY' (asymmetric, deprecated soon) or 'PRISMA_MANAGEMENT_API_JWT_SECRET' (symmetric JWT).
Docker logs for the specific timeout request.
docker logs backend_prisma_1 | grep local:management:cjh2cthzt000407474yh0ehem
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://single-server/user/$m#-1978467491]] after [300000 ms]. Sender[null] sent message of type "com.prisma.deploy.migration.migrator.DeploymentProtocol$Schedule".
at akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:595)
at akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)
at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:140)
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:864)
at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)
at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103)
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:862)
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
at java.lang.Thread.run(Thread.java:748)
{"@timestamp":"2018-05-11T19:28:01.625+00:00","@version":1,"message":"{\"key\":\"request/new\",\"requestId\":\"local:mana
gement:cjh2cthzt000407474yh0ehem\"}","logger_name":"com.prisma.deploy.server.ManagementServer","thread_name":"single-ser$
erTelegramBot*Zefault-dispatcher-2","level":"INFO","level_value":20000,"HOSTNAME":"9db9fe2fda1a"}
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://single-server/user/$m#-1978467491]] after [300000 ms]. $
ender[null] sent message of type "com.prisma.deploy.migration.migrator.DeploymentProtocol$Schedule".
at akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskSupport.scala:595)
at akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)
at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:140)
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:864)
at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)
at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:103) [14/359]
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:862)
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
{"key":"error/unhandled","requestId":"local:management:cjh2cthzt000407474yh0ehem","payload":{"exception":"akka.pattern.A$
kTimeoutException: Ask timed out on [Actor[akka://single-server/user/$m#-1978467491]] after [300000 ms]. Sender[null] se$
t message of type \"com.prisma.deploy.migration.migrator.DeploymentProtocol$Schedule\".","query":" mutation($name: $
tring!, $stage: String! $types: String! $dryRun: Boolean $secrets: [String!], $subscriptions: [FunctionInput!], $force: $
oolean) {\n deploy(input: {\n name: $name\n stage: $stage\n types: $types\n d$
yRun: $dryRun\n secrets: $secrets\n subscriptions: $subscriptions\n force: $force\n })
{\n errors {\n type\n field\n description\n }\n warnings {\n
type\n field\n description\n }\n migration {\n ...Migratio$
Fragment\n }\n }\n }\n \nfragment MigrationFragment on Migration {\n revision\n steps {\n
type\n __typename\n ... on CreateEnum {\n name\n ce_values: values\n }\n ... on CreateField {\n
model\n name\n cf_typeName: typeName\n cf_isRequired: isRequired\n cf_isList: isList\n cf_isU$
ique: unique\n cf_relation: relation\n cf_defaultValue: default\n cf_enum: enum\n }\n ... on Create$
odel {\n name\n }\n ... on CreateRelation {\n name\n leftModel\n rightModel\n }\n ... on
DeleteEnum {\n name\n }\n ... on DeleteField {\n model\n name\n }\n ... on DeleteModel {\n
name\n }\n ... on DeleteRelation {\n name\n }\n ... on UpdateEnum {\n name\n newName\n
values\n }\n ... on UpdateField {\n model\n name\n newName\n typeName\n isRequired\n
isList\n isUnique: unique\n relation\n default\n enum\n }\n ... on UpdateModel {\n name\n
um_newName: newName\n }\n }\n}\n\n ","variables":"{\"name\":\"default\",\"stage\":\"default\",\"types\":\"typ
e TelegramBot*Z: ID! @unique\\n text: String!\\n user: User!\\n list: TodoList\\n}\\n\\ntype User {\\n id: ID! @uniqu
e\\n name: String!\\n slug: String!\\n}\\n\\nenum TodoList {\\n ONE\\n NEXT\\n DEFERRED\\n DONE\\n}\\n\\n\",\"secre
ts\":null,\"subscriptions\":[]}","code":"0","stack_trace":"akka.pattern.PromiseActorRef$.$anonfun$defaultOnTimeout$1(AskS
upport.scala:595)\\n akka.pattern.PromiseActorRef$.$anonfun$apply$1(AskSupport.scala:605)\\n akka.actor.Scheduler$$anon$4
.run(Scheduler.scala:140)\\n scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:864)\\n scal
a.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:109)\\n scala.concurrent.BatchingExecutor.execute$(BatchingE
xecutor.scala:103)\\n scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:862)\\n akka.actor.LightArra
yRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)\\n akka.actor.LightArrayRevolverSchedule
r$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)\\n akka.actor.LightArrayRevolverScheduler$$anon$4.nextTi
ck(LightArrayRevolverScheduler.scala:283)\\n akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverSchedul
er.scala:235)\\n java.lang.Thread.run(Thread.java:748)","message":"Ask timed out on [Actor[akka://single-server/user/$m#-
1978467491]] after [300000 ms]. Sender[null] sent message of type \"com.prisma.deploy.migration.migrator.DeploymentProtoc
ol$Schedule\"."}}
at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
at java.lang.Thread.run(Thread.java:748)
{"@timestamp":"2018-05-11T19:33:02.218+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"local
:management:cjh2cthzt000407474yh0ehem\",\"payload\":{\"request_duration\":\"300592\"}}","logger_name":"com.prisma.deploy.
server.ManagementServer","thread_name":"single-server-akka.actor.default-dispatcher-24","level":"INFO","level_value":2000
0,"HOSTNAME":"9db9fe2fda1a"}
I can confirm that this works fine on the shared cluster and only breaks on the local cluster, even though the cluster versions are the same.
divyendusingh [backend]$ prisma cluster list
name        version  endpoint
──────────  ───────  ────────────────────────────────
local       1.8.2    http://localhost:4466/management
prisma-eu1  1.8.2    https://eu1.prisma.sh/management
prisma-us1  1.8.2    https://us1.prisma.sh/management
I can also confirm that this works fine with cluster version 1.8.1. The following change to docker-compose.yml is a workaround:

  prisma:
    image: prismagraphql/prisma:1.8.1

The breaking change is in the 1.8.2 cluster version (see the diff for reference; that is the change I feel is the cause). Also, this only breaks for the first deployment; if something is already deployed, it works fine. So moving to 1.8.1, deploying the service, and then upgrading to 1.8.2 is a valid workaround if you need to be on 1.8.2 for any reason.
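A sketch of that pin-then-upgrade sequence, assuming the compose file from earlier in this thread:

# 1. pin the server to 1.8.1 in docker-compose.yml:
#      image: prismagraphql/prisma:1.8.1
docker-compose up -d
prisma deploy        # first deployment succeeds on 1.8.1
# 2. switch the tag back to prismagraphql/prisma:1.8.2 and recreate:
docker-compose up -d
prisma deploy        # later deployments reportedly work on 1.8.2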
The 1.8.2 image is hotfixed. Confirm and close as appropriate, @marktani. Please docker pull prismagraphql/prisma:1.8 (or prisma:1.8.2) to update the image.
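For completeness, a minimal update sequence (service name taken from the compose file above):

docker pull prismagraphql/prisma:1.8   # or prismagraphql/prisma:1.8.2
docker-compose up -d prisma            # recreate the container on the pulled image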
Works for me now, thanks! I'll close. Feel free to reopen in case there is more to it.
I have tried 1.8.1, 1.8.2, 1.8.3, and 1.9-beta-26 and am still seeing this issue when deploying to an external MySQL database. It creates the prisma database as well as the project database before receiving this error.
Can we re-open this? Happy to provide whatever logs you may need.
Likewise, I tried several versions of 1.8.x and 1.9.0. Just as @ninemind describes, it connects and creates the DB no problem, then fails with this error message.
This issue should be reopened; whatever the alleged fix was, it was either insufficient or there has been a regression.
@dferber90 @dpetrick This is still an issue. Could someone please re-open this?
@ninemind @willvincent @ffxsam sorry for missing your messages. Could you please create a new issue and reference this one?