Prisma1: Performance issue with nested queries

Created on 13 Mar 2018 · 18 Comments · Source: prisma/prisma1

Bug Report

Current behavior

I am facing performance issues with Prisma queries when trying to fetch data four levels deep. The logs for the Prisma container show that it takes anywhere between 5 and 25 seconds (depending on traffic) to respond, and all my other requests are blocked until that query is resolved.

Container Logs:

{"@timestamp":"2018-03-13T08:00:42.321+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"api:api:cjepda4nb04rz0106fubesa30\",\"projectId\":\"example@dev\",\"payload\":{\"request_duration\":\"9802\",\"throttled_by\":\"0\"}}","logger_name":"com.prisma.api.server.ApiServer","thread_name":"single-server-akka.actor.default-dispatcher-27","level":"INFO","level_value":20000,"HOSTNAME":"fa6e13eba3d6"}
{"@timestamp":"2018-03-13T08:00:43.197+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"api:api:cjepda4nz04s101064sop1p4y\",\"projectId\":\"example@dev\",\"payload\":{\"request_duration\":\"10654\",\"throttled_by\":\"0\"}}","logger_name":"com.prisma.api.server.ApiServer","thread_name":"single-server-akka.actor.default-dispatcher-32","level":"INFO","level_value":20000,"HOSTNAME":"fa6e13eba3d6"}
{"@timestamp":"2018-03-13T08:00:45.509+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"api:api:cjepda1xl04go0106vkytwycr\",\"projectId\":\"example@dev\",\"payload\":{\"request_duration\":\"16508\",\"throttled_by\":\"0\"}}","logger_name":"com.prisma.api.server.ApiServer","thread_name":"single-server-akka.actor.default-dispatcher-2","level":"INFO","level_value":20000,"HOSTNAME":"fa6e13eba3d6"}
{"@timestamp":"2018-03-13T08:00:45.716+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"api:api:cjepda4ju04rr0106ta5yr2w6\",\"projectId\":\"example@dev\",\"payload\":{\"request_duration\":\"13322\",\"throttled_by\":\"0\"}}","logger_name":"com.prisma.api.server.ApiServer","thread_name":"single-server-akka.actor.default-dispatcher-28","level":"INFO","level_value":20000,"HOSTNAME":"fa6e13eba3d6"}
{"@timestamp":"2018-03-13T08:00:47.190+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"api:api:cjepda2b804ii010625yhyiyi\",\"projectId\":\"example@dev\",\"payload\":{\"request_duration\":\"17698\",\"throttled_by\":\"0\"}}","logger_name":"com.prisma.api.server.ApiServer","thread_name":"single-server-akka.actor.default-dispatcher-4","level":"INFO","level_value":20000,"HOSTNAME":"fa6e13eba3d6"}
{"@timestamp":"2018-03-13T08:00:48.913+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"api:api:cjepda4ym04st0106nbnn33hd\",\"projectId\":\"example@dev\",\"payload\":{\"request_duration\":\"15987\",\"throttled_by\":\"0\"}}","logger_name":"com.prisma.api.server.ApiServer","thread_name":"single-server-akka.actor.default-dispatcher-4","level":"INFO","level_value":20000,"HOSTNAME":"fa6e13eba3d6"}
{"@timestamp":"2018-03-13T08:00:49.183+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"api:api:cjepda4m504ry0106orkpu253\",\"projectId\":\"example@dev\",\"payload\":{\"request_duration\":\"16706\",\"throttled_by\":\"0\"}}","logger_name":"com.prisma.api.server.ApiServer","thread_name":"single-server-akka.actor.default-dispatcher-15","level":"INFO","level_value":20000,"HOSTNAME":"fa6e13eba3d6"}
{"@timestamp":"2018-03-13T08:00:50.612+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"api:api:cjepda4lf04rs0106obg23azz\",\"projectId\":\"example@dev\",\"payload\":{\"request_duration\":\"18161\",\"throttled_by\":\"0\"}}","logger_name":"com.prisma.api.server.ApiServer","thread_name":"single-server-akka.actor.default-dispatcher-29","level":"INFO","level_value":20000,"HOSTNAME":"fa6e13eba3d6"}
{"@timestamp":"2018-03-13T08:00:50.707+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"api:api:cjepda2b704ig01062xygadxd\",\"projectId\":\"example@dev\",\"payload\":{\"request_duration\":\"21216\",\"throttled_by\":\"0\"}}","logger_name":"com.prisma.api.server.ApiServer","thread_name":"single-server-akka.actor.default-dispatcher-2","level":"INFO","level_value":20000,"HOSTNAME":"fa6e13eba3d6"}
{"@timestamp":"2018-03-13T08:00:51.740+00:00","@version":1,"message":"{\"key\":\"request/complete\",\"requestId\":\"api:api:cjepda39604m60106zbqfrn1q\",\"projectId\":\"example@dev\",\"payload\":{\"request_duration\":\"21026\",\"throttled_by\":\"0\"}}","logger_name":"com.prisma.api.server.ApiServer","thread_name":"single-server-akka.actor.default-dispatcher-4","level":"INFO","level_value":20000,"HOSTNAME":"fa6e13eba3d6"}

Also, the Prisma container throws the following error when I run a load test for the above query with a concurrency of 2 requests and a rate of 10 requests per second:

java.util.concurrent.RejectedExecutionException: Task slick.basic.BasicBackend$DatabaseDef$$anon$2@4fac748b rejected from slick.util.AsyncExecutor$$anon$2$$anon$1@3ce74e20[Running, pool size = 10, active threads = 10, queued tasks = 1000, completed tasks = 85606]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
    at slick.util.AsyncExecutor$$anon$2$$anon$3.execute(AsyncExecutor.scala:120)
    at slick.basic.BasicBackend$DatabaseDef.runSynchronousDatabaseAction(BasicBackend.scala:233)
    at slick.basic.BasicBackend$DatabaseDef.runSynchronousDatabaseAction$(BasicBackend.scala:231)
    at slick.jdbc.JdbcBackend$DatabaseDef.runSynchronousDatabaseAction(JdbcBackend.scala:38)
    at slick.basic.BasicBackend$DatabaseDef.runInContext(BasicBackend.scala:210)
    at slick.basic.BasicBackend$DatabaseDef.runInContext$(BasicBackend.scala:142)
    at slick.jdbc.JdbcBackend$DatabaseDef.runInContext(JdbcBackend.scala:38)
    at slick.basic.BasicBackend$DatabaseDef.runInternal(BasicBackend.scala:78)
    at slick.basic.BasicBackend$DatabaseDef.runInternal$(BasicBackend.scala:77)
    at slick.jdbc.JdbcBackend$DatabaseDef.runInternal(JdbcBackend.scala:38)
    at slick.basic.BasicBackend$DatabaseDef.run(BasicBackend.scala:75)
    at slick.basic.BasicBackend$DatabaseDef.run$(BasicBackend.scala:75)
    at slick.jdbc.JdbcBackend$DatabaseDef.run(JdbcBackend.scala:38)
    at com.prisma.api.database.DataResolver.$anonfun$resolveByRelationManyModels$1(DataResolver.scala:172)
    at com.prisma.metrics.TimerMetric.timeFuture(Metrics.scala:110)
    at com.prisma.api.database.DataResolver.performWithTiming(DataResolver.scala:38)
    at com.prisma.api.database.DataResolver.resolveByRelationManyModels(DataResolver.scala:174)
    at com.prisma.api.database.deferreds.ToOneDeferredResolver.resolve(ToOneDeferredResolver.scala:24)
    at com.prisma.api.database.deferreds.DeferredResolverProvider.$anonfun$resolve$6(DeferredResolverProvider.scala:139)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
    at scala.collection.immutable.Map$Map1.foreach(Map.scala:120)
    at scala.collection.TraversableLike.map(TraversableLike.scala:234)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at com.prisma.api.database.deferreds.DeferredResolverProvider.resolve(DeferredResolverProvider.scala:137)
    at sangria.execution.Resolver.sangria$execution$Resolver$$resolveDeferred(Resolver.scala:766)

This issue is blocking us from deploying the application to production.

Versions:
prisma: 1.3.2
prisma-binding: 1.5.6
graphql-yoga: 1.2.0

Labels: bug/0-needs-info, status/stale


All 18 comments

@raeesaa Could you share the data model and the respective query?

Hi @akoenig It would be difficult for me to share the data model as it is confidential, but I'll try to create a sample Prisma app and reproduce the issue.

Meanwhile, can you think of any options we can pass to the Prisma container that would help improve performance? I did not find anything relevant in the documentation.

@raeesaa One option would be to pass the debug: true flag when you instantiate the Prisma binding. This reveals the requests that hit the Prisma server. You can't see the actual queries, but it is a first step toward gaining some insights.
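
For reference, a minimal sketch of passing that flag when constructing the binding (assuming a typical prisma-binding 1.x setup; the schema path and endpoint are placeholders):

    const { Prisma } = require('prisma-binding');

    // debug: true makes the binding log every request it sends to the Prisma server.
    const db = new Prisma({
      typeDefs: 'src/generated/prisma.graphql',      // generated Prisma API schema (placeholder path)
      endpoint: 'http://localhost:4466/example/dev', // placeholder Prisma service endpoint
      debug: true,
    });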

@akoenig I have tried to replicate the existing data model in a sample application: https://github.com/raeesaa/resolver-forwarding. I ran load tests on the query (shown below for readability) and the results are as follows:
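
The nested query embedded (escaped) in the loadtest commands is:

    query {
      level1s {
        name
        description
        active
        level2s {
          name
          description
          active
          level3s {
            name
            description
            active
          }
          level4s {
            name
            description
            active
            level5 {
              name
              description
              active
              level6 {
                name
                description
                active
              }
            }
          }
        }
      }
    }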

  1. Total 1000 requests at the rate of 50 requests per second
loadtest -c 10 --rps 50 -n 1000 --data '{"query":"query{\n  level1s {\n    name\n    description\n    active\n    level2s {\n      name\n      description\n      active\n      level3s {\n        name\n        description\n        active\n      }\n      level4s {\n        name\n        description\n        active\n        level5 {\n          name\n          description\n          active\n          level6 {\n            name\n            description\n            active\n          }\n        }\n      }\n    }\n  }\n}","variables":null}' -m POST -T application/json http://localhost:4000/
[Tue Mar 13 2018 16:32:00 GMT+0530 (IST)] INFO Requests: 0 (0%), requests per second: 0, mean latency: 0 ms
[Tue Mar 13 2018 16:32:05 GMT+0530 (IST)] INFO Requests: 183 (18%), requests per second: 36, mean latency: 442.2 ms
[Tue Mar 13 2018 16:32:10 GMT+0530 (IST)] INFO Requests: 374 (37%), requests per second: 39, mean latency: 1078.7 ms
[Tue Mar 13 2018 16:32:15 GMT+0530 (IST)] INFO Requests: 599 (60%), requests per second: 45, mean latency: 1474.1 ms
[Tue Mar 13 2018 16:32:20 GMT+0530 (IST)] INFO Requests: 796 (80%), requests per second: 39, mean latency: 2020.4 ms
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO 
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO Target URL:          http://localhost:4000/
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO Max requests:        1000
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO Concurrency level:   10
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO Agent:               none
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO Requests per second: 50
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO 
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO Completed requests:  1000
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO Total errors:        0
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO Total time:          24.460253669999997 s
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO Requests per second: 41
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO Mean latency:        1477.2 ms
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO 
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO Percentage of the requests served within a certain time
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO   50%      912 ms
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO   90%      2575 ms
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO   95%      8833 ms
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO   99%      16269 ms
[Tue Mar 13 2018 16:32:24 GMT+0530 (IST)] INFO  100%      16833 ms (longest request)

  2. Total 5000 requests at the rate of 50 requests per second
loadtest -c 10 --rps 50 -n 5000 --data '{"query":"query{\n  level1s {\n    name\n    description\n    active\n    level2s {\n      name\n      description\n      active\n      level3s {\n        name\n        description\n        active\n      }\n      level4s {\n        name\n        description\n        active\n        level5 {\n          name\n          description\n          active\n          level6 {\n            name\n            description\n            active\n          }\n        }\n      }\n    }\n  }\n}","variables":null}' -m POST -T application/json http://localhost:4000/
[Tue Mar 13 2018 16:33:15 GMT+0530 (IST)] INFO Requests: 0 (0%), requests per second: 0, mean latency: 0 ms
[Tue Mar 13 2018 16:33:20 GMT+0530 (IST)] INFO Requests: 141 (3%), requests per second: 28, mean latency: 721.7 ms
[Tue Mar 13 2018 16:33:25 GMT+0530 (IST)] INFO Requests: 330 (7%), requests per second: 37, mean latency: 1311.6 ms
[Tue Mar 13 2018 16:33:30 GMT+0530 (IST)] INFO Requests: 562 (11%), requests per second: 47, mean latency: 2107.1 ms
[Tue Mar 13 2018 16:33:35 GMT+0530 (IST)] INFO Requests: 786 (16%), requests per second: 45, mean latency: 1051.3 ms
[Tue Mar 13 2018 16:33:40 GMT+0530 (IST)] INFO Requests: 1064 (21%), requests per second: 56, mean latency: 3492.9 ms
[Tue Mar 13 2018 16:33:45 GMT+0530 (IST)] INFO Requests: 1318 (26%), requests per second: 51, mean latency: 2480.7 ms
[Tue Mar 13 2018 16:33:50 GMT+0530 (IST)] INFO Requests: 1591 (32%), requests per second: 55, mean latency: 2638.2 ms
[Tue Mar 13 2018 16:33:55 GMT+0530 (IST)] INFO Requests: 1835 (37%), requests per second: 49, mean latency: 3185.7 ms
[Tue Mar 13 2018 16:34:00 GMT+0530 (IST)] INFO Requests: 2050 (41%), requests per second: 43, mean latency: 3674.3 ms
[Tue Mar 13 2018 16:34:05 GMT+0530 (IST)] INFO Requests: 2237 (45%), requests per second: 37, mean latency: 4011.9 ms
[Tue Mar 13 2018 16:34:10 GMT+0530 (IST)] INFO Requests: 2336 (47%), requests per second: 20, mean latency: 9378.1 ms
[Tue Mar 13 2018 16:34:10 GMT+0530 (IST)] INFO Errors: 2, accumulated errors: 2, 0.1% of total requests
[Tue Mar 13 2018 16:34:15 GMT+0530 (IST)] INFO Requests: 2402 (48%), requests per second: 13, mean latency: 12911.7 ms
[Tue Mar 13 2018 16:34:15 GMT+0530 (IST)] INFO Errors: 1, accumulated errors: 3, 0.1% of total requests
[Tue Mar 13 2018 16:34:20 GMT+0530 (IST)] INFO Requests: 2605 (52%), requests per second: 41, mean latency: 10990.1 ms
[Tue Mar 13 2018 16:34:20 GMT+0530 (IST)] INFO Errors: 0, accumulated errors: 3, 0.1% of total requests
[Tue Mar 13 2018 16:34:25 GMT+0530 (IST)] INFO Requests: 2749 (55%), requests per second: 29, mean latency: 13975.1 ms
[Tue Mar 13 2018 16:34:25 GMT+0530 (IST)] INFO Errors: 0, accumulated errors: 3, 0.1% of total requests
[Tue Mar 13 2018 16:34:30 GMT+0530 (IST)] INFO Requests: 3038 (61%), requests per second: 58, mean latency: 15066.4 ms
[Tue Mar 13 2018 16:34:30 GMT+0530 (IST)] INFO Errors: 0, accumulated errors: 3, 0.1% of total requests
[Tue Mar 13 2018 16:34:35 GMT+0530 (IST)] INFO Requests: 3192 (64%), requests per second: 31, mean latency: 13512.9 ms
[Tue Mar 13 2018 16:34:35 GMT+0530 (IST)] INFO Errors: 0, accumulated errors: 3, 0.1% of total requests
[Tue Mar 13 2018 16:34:40 GMT+0530 (IST)] INFO Requests: 3426 (69%), requests per second: 47, mean latency: 15722.1 ms
[Tue Mar 13 2018 16:34:40 GMT+0530 (IST)] INFO Errors: 2, accumulated errors: 5, 0.1% of total requests
[Tue Mar 13 2018 16:34:45 GMT+0530 (IST)] INFO Requests: 3698 (74%), requests per second: 54, mean latency: 16669.6 ms
[Tue Mar 13 2018 16:34:45 GMT+0530 (IST)] INFO Errors: 2, accumulated errors: 7, 0.2% of total requests
[Tue Mar 13 2018 16:34:50 GMT+0530 (IST)] INFO Requests: 3829 (77%), requests per second: 26, mean latency: 18945.6 ms
[Tue Mar 13 2018 16:34:50 GMT+0530 (IST)] INFO Errors: 0, accumulated errors: 7, 0.2% of total requests
[Tue Mar 13 2018 16:34:55 GMT+0530 (IST)] INFO Requests: 3995 (80%), requests per second: 33, mean latency: 14662.1 ms
[Tue Mar 13 2018 16:34:55 GMT+0530 (IST)] INFO Errors: 1, accumulated errors: 8, 0.2% of total requests
[Tue Mar 13 2018 16:35:00 GMT+0530 (IST)] INFO Requests: 4035 (81%), requests per second: 8, mean latency: 23623.6 ms
[Tue Mar 13 2018 16:35:00 GMT+0530 (IST)] INFO Errors: 2, accumulated errors: 10, 0.2% of total requests
[Tue Mar 13 2018 16:35:05 GMT+0530 (IST)] INFO Requests: 4378 (88%), requests per second: 69, mean latency: 18298.6 ms
[Tue Mar 13 2018 16:35:05 GMT+0530 (IST)] INFO Errors: 7, accumulated errors: 17, 0.4% of total requests
[Tue Mar 13 2018 16:35:10 GMT+0530 (IST)] INFO Requests: 4844 (97%), requests per second: 93, mean latency: 19952.9 ms
[Tue Mar 13 2018 16:35:10 GMT+0530 (IST)] INFO Errors: 14, accumulated errors: 31, 0.6% of total requests
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO 
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO Target URL:          http://localhost:4000/
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO Max requests:        5000
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO Concurrency level:   10
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO Agent:               none
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO Requests per second: 50
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO 
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO Completed requests:  5000
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO Total errors:        36
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO Total time:          118.611827244 s
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO Requests per second: 42
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO Mean latency:        10276.7 ms
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO 
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO Percentage of the requests served within a certain time
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO   50%      10375 ms
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO   90%      24525 ms
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO   95%      33796 ms
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO   99%      38389 ms
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO  100%      63030 ms (longest request)
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO 
[Tue Mar 13 2018 16:35:14 GMT+0530 (IST)] INFO    -1:   36 errors

Module used for running the load tests: https://github.com/alexfernandez/loadtest

The average latencies are far too high, considering that there is no data in the database in the above examples.

Thanks for chiming in @akoenig!

Hi @raeesaa

There is definitely something wrong with your configuration. I just did a quick test based on your schema against an EC2 t2.small instance:

❯ loadtest -c 10 --rps 50 -n 5000 --data '{"query":"query{\n  level1s {\n    name\n    description\n    active\n    level2s {\n      name\n      description\n      active\n      level3s {\n        name\n        description\n        active\n      }\n      level4s {\n        name\n        description\n        active\n        level5 {\n          name\n          description\n          active\n          level6 {\n            name\n            description\n            active\n          }\n        }\n      }\n    }\n  }\n}","variables":null}' -m POST -T application/json https://aurora_soren-test.prisma.sh/a88/dev
[Tue Mar 13 2018 18:04:30 GMT+0100 (CET)] INFO Requests: 0 (0%), requests per second: 0, mean latency: 0 ms
[Tue Mar 13 2018 18:04:35 GMT+0100 (CET)] INFO Requests: 217 (4%), requests per second: 43, mean latency: 195.8 ms
[Tue Mar 13 2018 18:04:40 GMT+0100 (CET)] INFO Requests: 467 (9%), requests per second: 50, mean latency: 191.2 ms
[Tue Mar 13 2018 18:04:45 GMT+0100 (CET)] INFO Requests: 717 (14%), requests per second: 50, mean latency: 192.6 ms
[Tue Mar 13 2018 18:04:50 GMT+0100 (CET)] INFO Requests: 967 (19%), requests per second: 50, mean latency: 192.5 ms
[Tue Mar 13 2018 18:04:55 GMT+0100 (CET)] INFO Requests: 1217 (24%), requests per second: 50, mean latency: 190.8 ms
[Tue Mar 13 2018 18:05:00 GMT+0100 (CET)] INFO Requests: 1467 (29%), requests per second: 50, mean latency: 190.3 ms
[Tue Mar 13 2018 18:05:05 GMT+0100 (CET)] INFO Requests: 1717 (34%), requests per second: 50, mean latency: 197.5 ms
[Tue Mar 13 2018 18:05:10 GMT+0100 (CET)] INFO Requests: 1967 (39%), requests per second: 50, mean latency: 210.5 ms
[Tue Mar 13 2018 18:05:15 GMT+0100 (CET)] INFO Requests: 2217 (44%), requests per second: 50, mean latency: 190.9 ms
[Tue Mar 13 2018 18:05:20 GMT+0100 (CET)] INFO Requests: 2467 (49%), requests per second: 50, mean latency: 188.2 ms
[Tue Mar 13 2018 18:05:25 GMT+0100 (CET)] INFO Requests: 2717 (54%), requests per second: 50, mean latency: 189.3 ms
[Tue Mar 13 2018 18:05:30 GMT+0100 (CET)] INFO Requests: 2967 (59%), requests per second: 50, mean latency: 189.5 ms
[Tue Mar 13 2018 18:05:35 GMT+0100 (CET)] INFO Requests: 3217 (64%), requests per second: 50, mean latency: 188.3 ms
[Tue Mar 13 2018 18:05:40 GMT+0100 (CET)] INFO Requests: 3467 (69%), requests per second: 50, mean latency: 193.3 ms
[Tue Mar 13 2018 18:05:45 GMT+0100 (CET)] INFO Requests: 3717 (74%), requests per second: 50, mean latency: 205.3 ms
[Tue Mar 13 2018 18:05:50 GMT+0100 (CET)] INFO Requests: 3966 (79%), requests per second: 50, mean latency: 198.8 ms
[Tue Mar 13 2018 18:05:55 GMT+0100 (CET)] INFO Requests: 4217 (84%), requests per second: 50, mean latency: 195.4 ms
[Tue Mar 13 2018 18:06:00 GMT+0100 (CET)] INFO Requests: 4467 (89%), requests per second: 50, mean latency: 191.1 ms
[Tue Mar 13 2018 18:06:05 GMT+0100 (CET)] INFO Requests: 4717 (94%), requests per second: 50, mean latency: 193.1 ms
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO Requests: 4967 (99%), requests per second: 50, mean latency: 190 ms
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO 
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO Target URL:          https://aurora_soren-test.prisma.sh/a88/dev
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO Max requests:        5000
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO Concurrency level:   10
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO Agent:               none
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO Requests per second: 50
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO 
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO Completed requests:  5000
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO Total errors:        0
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO Total time:          100.63438927499999 s
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO Requests per second: 50
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO Mean latency:        193.6 ms
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO 
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO Percentage of the requests served within a certain time
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO   50%      189 ms
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO   90%      199 ms
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO   95%      209 ms
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO   99%      280 ms
[Tue Mar 13 2018 18:06:10 GMT+0100 (CET)] INFO  100%      1208 ms (longest request)

The majority of the latency is the roundtrip Berlin-Dublin.

The same test against a local prisma running in docker:

❯ loadtest -c 10 --rps 50 -n 5000 --data '{"query":"query{\n  level1s {\n    name\n    description\n    active\n    level2s {\n      name\n      description\n      active\n      level3s {\n        name\n        description\n        active\n      }\n      level4s {\n        name\n        description\n        active\n        level5 {\n          name\n          description\n          active\n          level6 {\n            name\n            description\n            active\n          }\n        }\n      }\n    }\n  }\n}","variables":null}' -m POST -T application/json http://localhost:4466/a88/dev
[Tue Mar 13 2018 18:08:19 GMT+0100 (CET)] INFO Requests: 0 (0%), requests per second: 0, mean latency: 0 ms
[Tue Mar 13 2018 18:08:24 GMT+0100 (CET)] INFO Requests: 226 (5%), requests per second: 45, mean latency: 16.9 ms
[Tue Mar 13 2018 18:08:29 GMT+0100 (CET)] INFO Requests: 476 (10%), requests per second: 50, mean latency: 12.5 ms
[Tue Mar 13 2018 18:08:34 GMT+0100 (CET)] INFO Requests: 726 (15%), requests per second: 50, mean latency: 11.2 ms
[Tue Mar 13 2018 18:08:39 GMT+0100 (CET)] INFO Requests: 976 (20%), requests per second: 50, mean latency: 10 ms
[Tue Mar 13 2018 18:08:44 GMT+0100 (CET)] INFO Requests: 1226 (25%), requests per second: 50, mean latency: 9.7 ms
[Tue Mar 13 2018 18:08:49 GMT+0100 (CET)] INFO Requests: 1476 (30%), requests per second: 50, mean latency: 9.7 ms
[Tue Mar 13 2018 18:08:54 GMT+0100 (CET)] INFO Requests: 1726 (35%), requests per second: 50, mean latency: 8.7 ms
[Tue Mar 13 2018 18:08:59 GMT+0100 (CET)] INFO Requests: 1976 (40%), requests per second: 50, mean latency: 8.8 ms
[Tue Mar 13 2018 18:09:04 GMT+0100 (CET)] INFO Requests: 2226 (45%), requests per second: 50, mean latency: 8.3 ms
[Tue Mar 13 2018 18:09:09 GMT+0100 (CET)] INFO Requests: 2476 (50%), requests per second: 50, mean latency: 8.3 ms
[Tue Mar 13 2018 18:09:14 GMT+0100 (CET)] INFO Requests: 2726 (55%), requests per second: 50, mean latency: 8.2 ms
[Tue Mar 13 2018 18:09:19 GMT+0100 (CET)] INFO Requests: 2976 (60%), requests per second: 50, mean latency: 7.9 ms
[Tue Mar 13 2018 18:09:24 GMT+0100 (CET)] INFO Requests: 3226 (65%), requests per second: 50, mean latency: 8.2 ms
[Tue Mar 13 2018 18:09:29 GMT+0100 (CET)] INFO Requests: 3476 (70%), requests per second: 50, mean latency: 7.8 ms
[Tue Mar 13 2018 18:09:34 GMT+0100 (CET)] INFO Requests: 3726 (75%), requests per second: 50, mean latency: 7.8 ms
[Tue Mar 13 2018 18:09:39 GMT+0100 (CET)] INFO Requests: 3977 (80%), requests per second: 50, mean latency: 12.5 ms
[Tue Mar 13 2018 18:09:44 GMT+0100 (CET)] INFO Requests: 4226 (85%), requests per second: 50, mean latency: 7.9 ms
[Tue Mar 13 2018 18:09:49 GMT+0100 (CET)] INFO Requests: 4476 (90%), requests per second: 50, mean latency: 7.8 ms
[Tue Mar 13 2018 18:09:54 GMT+0100 (CET)] INFO Requests: 4726 (95%), requests per second: 50, mean latency: 8.2 ms
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO Requests: 4976 (100%), requests per second: 50, mean latency: 7.9 ms
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO 
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO Target URL:          http://localhost:4466/a88/dev
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO Max requests:        5000
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO Concurrency level:   10
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO Agent:               none
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO Requests per second: 50
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO 
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO Completed requests:  5000
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO Total errors:        0
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO Total time:          100.45174908999999 s
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO Requests per second: 50
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO Mean latency:        9.4 ms
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO 
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO Percentage of the requests served within a certain time
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO   50%      8 ms
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO   90%      12 ms
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO   95%      15 ms
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO   99%      22 ms
[Tue Mar 13 2018 18:09:59 GMT+0100 (CET)] INFO  100%      204 ms (longest request)

Let's see if we can find the issue :-)

  • Are you running Prisma locally in docker?
  • If so, what PC are you running on? How much memory do you have?
  • How much memory is allocated to docker? I run with 4 GB during development, but less should be enough.
  • Have you tried Prisma Cloud? Be aware that the free plan limits you to 1 req/s. You can create a cluster on the paid plan, remove it again after a few hours, and only pay for the time it was active.
  • Are you connecting to the default db that is created by prisma local? If so, it would be interesting to see if changing to a non-docker instance of MySQL would change anything.

When you provide these answers we can dive deeper.

@sorenbs Thank you for the reply. Here are answers to your questions:

  • I am running Prisma locally in a Docker container.
  • I am using a MacBook Pro with 16 GB of RAM.
  • The memory allocated to Docker is 2 GB.
  • I am not using Prisma Cloud.
  • I am currently using the default database Docker container created by prisma local. I will try connecting to a non-Docker instance of MySQL, see if performance changes, and get back to you.

We are also facing performance issues with our actual application deployed on an EC2 instance. I have tried to replicate our data model in the above GitHub repository.

@sorenbs I just realised that you ran the test against the dev endpoint. My results, on the other hand, are for the application endpoint. I have a custom resolver for the query in the actual application, since we need to perform some manipulation on the data.
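
For context, a rough sketch of the two resolver styles being compared here, assuming the usual prisma-binding context (ctx.db); the transform step is hypothetical:

    const { forwardTo } = require('prisma-binding');

    const Query = {
      // Plain forwarding: the binding relays the query to the Prisma service as-is.
      level1s: forwardTo('db'),

      // Custom resolver: fetch through the binding, then manipulate the data.
      // level1s: async (parent, args, ctx, info) => {
      //   const data = await ctx.db.query.level1s(args, info);
      //   return transform(data); // hypothetical post-processing step
      // },
    };

    module.exports = { Query };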

For the dev endpoint, I am also getting good results:

loadtest -c 10 --rps 50 -n 5000 --data '{"query":"query{\n  level1s {\n    name\n    description\n    active\n    level2s {\n      name\n      description\n      active\n      level3s {\n        name\n        description\n        active\n      }\n      level4s {\n        name\n        description\n        active\n        level5 {\n          name\n          description\n          active\n          level6 {\n            name\n            description\n            active\n          }\n        }\n      }\n    }\n  }\n}","variables":null}' -m POST -T application/json http://localhost:4466/resolver-forwarding/dev
[Tue Mar 13 2018 23:23:00 GMT+0530 (IST)] INFO Requests: 0 (0%), requests per second: 0, mean latency: 0 ms
[Tue Mar 13 2018 23:23:05 GMT+0530 (IST)] INFO Requests: 236 (5%), requests per second: 47, mean latency: 15.5 ms
[Tue Mar 13 2018 23:23:10 GMT+0530 (IST)] INFO Requests: 486 (10%), requests per second: 50, mean latency: 10.9 ms
[Tue Mar 13 2018 23:23:15 GMT+0530 (IST)] INFO Requests: 736 (15%), requests per second: 50, mean latency: 9.5 ms
[Tue Mar 13 2018 23:23:20 GMT+0530 (IST)] INFO Requests: 986 (20%), requests per second: 50, mean latency: 9.1 ms
[Tue Mar 13 2018 23:23:25 GMT+0530 (IST)] INFO Requests: 1233 (25%), requests per second: 49, mean latency: 9.1 ms
[Tue Mar 13 2018 23:23:30 GMT+0530 (IST)] INFO Requests: 1485 (30%), requests per second: 50, mean latency: 9.3 ms
[Tue Mar 13 2018 23:23:35 GMT+0530 (IST)] INFO Requests: 1736 (35%), requests per second: 50, mean latency: 8.9 ms
[Tue Mar 13 2018 23:23:40 GMT+0530 (IST)] INFO Requests: 1986 (40%), requests per second: 50, mean latency: 9.2 ms
[Tue Mar 13 2018 23:23:45 GMT+0530 (IST)] INFO Requests: 2235 (45%), requests per second: 50, mean latency: 9 ms
[Tue Mar 13 2018 23:23:50 GMT+0530 (IST)] INFO Requests: 2486 (50%), requests per second: 50, mean latency: 9.5 ms
[Tue Mar 13 2018 23:23:55 GMT+0530 (IST)] INFO Requests: 2733 (55%), requests per second: 49, mean latency: 9 ms
[Tue Mar 13 2018 23:24:00 GMT+0530 (IST)] INFO Requests: 2986 (60%), requests per second: 51, mean latency: 10.8 ms
[Tue Mar 13 2018 23:24:05 GMT+0530 (IST)] INFO Requests: 3236 (65%), requests per second: 50, mean latency: 9.2 ms
[Tue Mar 13 2018 23:24:10 GMT+0530 (IST)] INFO Requests: 3486 (70%), requests per second: 50, mean latency: 8 ms
[Tue Mar 13 2018 23:24:15 GMT+0530 (IST)] INFO Requests: 3736 (75%), requests per second: 50, mean latency: 8.4 ms
[Tue Mar 13 2018 23:24:20 GMT+0530 (IST)] INFO Requests: 3986 (80%), requests per second: 50, mean latency: 7.9 ms
[Tue Mar 13 2018 23:24:25 GMT+0530 (IST)] INFO Requests: 4236 (85%), requests per second: 50, mean latency: 8 ms
[Tue Mar 13 2018 23:24:30 GMT+0530 (IST)] INFO Requests: 4486 (90%), requests per second: 50, mean latency: 8 ms
[Tue Mar 13 2018 23:24:35 GMT+0530 (IST)] INFO Requests: 4736 (95%), requests per second: 50, mean latency: 8.3 ms
[Tue Mar 13 2018 23:24:40 GMT+0530 (IST)] INFO Requests: 4986 (100%), requests per second: 50, mean latency: 8.5 ms
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO 
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO Target URL:          http://localhost:4466/resolver-forwarding/dev
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO Max requests:        5000
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO Concurrency level:   10
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO Agent:               none
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO Requests per second: 50
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO 
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO Completed requests:  5000
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO Total errors:        0
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO Total time:          100.288975631 s
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO Requests per second: 50
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO Mean latency:        9.3 ms
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO 
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO Percentage of the requests served within a certain time
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO   50%      8 ms
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO   90%      12 ms
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO   95%      14 ms
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO   99%      21 ms
[Tue Mar 13 2018 23:24:41 GMT+0530 (IST)] INFO  100%      47 ms (longest request)

@sorenbs / @marktani It looks like the communication between the application server and the Prisma service is causing the latency. How does the communication between the two happen?

The following are the results for my actual application query when I hit the application server:

loadtest -c 10 --rps 50 -n 3000 -H "Authorization: Bearer <token>"  --data '<query_string>' -m POST -T application/json http://localhost:5252/
headers: object, {"host":"localhost:5252","user-agent":"loadtest/2.3.0","accept":"*/*","authorization":" Bearer <token>"}
[Wed Mar 14 2018 11:32:08 GMT+0530 (IST)] INFO Requests: 0 (0%), requests per second: 0, mean latency: 0 ms
[Wed Mar 14 2018 11:32:13 GMT+0530 (IST)] INFO Requests: 108 (4%), requests per second: 22, mean latency: 1277.9 ms
[Wed Mar 14 2018 11:32:18 GMT+0530 (IST)] INFO Requests: 248 (8%), requests per second: 28, mean latency: 3306.1 ms
[Wed Mar 14 2018 11:32:23 GMT+0530 (IST)] INFO Requests: 398 (13%), requests per second: 30, mean latency: 5101.8 ms
[Wed Mar 14 2018 11:32:28 GMT+0530 (IST)] INFO Requests: 558 (19%), requests per second: 32, mean latency: 6982.5 ms
[Wed Mar 14 2018 11:32:33 GMT+0530 (IST)] INFO Requests: 737 (25%), requests per second: 36, mean latency: 8793.2 ms
[Wed Mar 14 2018 11:32:38 GMT+0530 (IST)] INFO Requests: 890 (30%), requests per second: 31, mean latency: 10300.3 ms
[Wed Mar 14 2018 11:32:43 GMT+0530 (IST)] INFO Requests: 1034 (34%), requests per second: 29, mean latency: 12448.2 ms
[Wed Mar 14 2018 11:32:48 GMT+0530 (IST)] INFO Requests: 1182 (39%), requests per second: 30, mean latency: 14871.5 ms
[Wed Mar 14 2018 11:32:53 GMT+0530 (IST)] INFO Requests: 1332 (44%), requests per second: 30, mean latency: 16283.8 ms
[Wed Mar 14 2018 11:32:58 GMT+0530 (IST)] INFO Requests: 1488 (50%), requests per second: 31, mean latency: 18755 ms
[Wed Mar 14 2018 11:33:03 GMT+0530 (IST)] INFO Requests: 1620 (54%), requests per second: 26, mean latency: 18581.5 ms
[Wed Mar 14 2018 11:33:08 GMT+0530 (IST)] INFO Requests: 1830 (61%), requests per second: 42, mean latency: 22081.1 ms
[Wed Mar 14 2018 11:33:13 GMT+0530 (IST)] INFO Requests: 2023 (67%), requests per second: 39, mean latency: 23796.2 ms
[Wed Mar 14 2018 11:33:18 GMT+0530 (IST)] INFO Requests: 2121 (71%), requests per second: 20, mean latency: 28738.8 ms
[Wed Mar 14 2018 11:33:23 GMT+0530 (IST)] INFO Requests: 2289 (76%), requests per second: 34, mean latency: 21667.9 ms
[Wed Mar 14 2018 11:33:28 GMT+0530 (IST)] INFO Requests: 2519 (84%), requests per second: 46, mean latency: 23700.8 ms
[Wed Mar 14 2018 11:33:33 GMT+0530 (IST)] INFO Requests: 2770 (92%), requests per second: 50, mean latency: 25539.2 ms
[Wed Mar 14 2018 11:33:38 GMT+0530 (IST)] INFO Requests: 2922 (97%), requests per second: 30, mean latency: 27548.9 ms
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO 
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO Target URL:          http://localhost:5252/
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO Max requests:        3000
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO Concurrency level:   10
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO Agent:               none
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO Requests per second: 50
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO 
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO Completed requests:  3000
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO Total errors:        0
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO Total time:          92.696615393 s
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO Requests per second: 32
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO Mean latency:        17300.8 ms
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO 
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO Percentage of the requests served within a certain time
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO   50%      15604 ms
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO   90%      29666 ms
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO   95%      36530 ms
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO   99%      43395 ms
[Wed Mar 14 2018 11:33:40 GMT+0530 (IST)] INFO  100%      58482 ms (longest request)

And the following are the results for the same query when I hit the Prisma service directly:

loadtest -c 10 --rps 50 -n 3000 -H "Authorization: Bearer <auth_token>"  --data '<query>' -m POST -T application/json http://localhost:4466/<app>/dev
headers: object, {"host":"localhost:4466","user-agent":"loadtest/2.3.0","accept":"*/*","authorization":" Bearer <auth_token>"}
[Wed Mar 14 2018 11:49:08 GMT+0530 (IST)] INFO Requests: 0 (0%), requests per second: 0, mean latency: 0 ms
[Wed Mar 14 2018 11:49:13 GMT+0530 (IST)] INFO Requests: 227 (8%), requests per second: 45, mean latency: 84.4 ms
[Wed Mar 14 2018 11:49:18 GMT+0530 (IST)] INFO Requests: 479 (16%), requests per second: 50, mean latency: 97.6 ms
[Wed Mar 14 2018 11:49:23 GMT+0530 (IST)] INFO Requests: 730 (24%), requests per second: 50, mean latency: 90 ms
[Wed Mar 14 2018 11:49:28 GMT+0530 (IST)] INFO Requests: 980 (33%), requests per second: 50, mean latency: 82.5 ms
[Wed Mar 14 2018 11:49:33 GMT+0530 (IST)] INFO Requests: 1226 (41%), requests per second: 49, mean latency: 161 ms
[Wed Mar 14 2018 11:49:38 GMT+0530 (IST)] INFO Requests: 1476 (49%), requests per second: 50, mean latency: 129.2 ms
[Wed Mar 14 2018 11:49:43 GMT+0530 (IST)] INFO Requests: 1730 (58%), requests per second: 51, mean latency: 92.8 ms
[Wed Mar 14 2018 11:49:48 GMT+0530 (IST)] INFO Requests: 1977 (66%), requests per second: 49, mean latency: 87.1 ms
[Wed Mar 14 2018 11:49:53 GMT+0530 (IST)] INFO Requests: 2229 (74%), requests per second: 50, mean latency: 88.3 ms
[Wed Mar 14 2018 11:49:58 GMT+0530 (IST)] INFO Requests: 2474 (82%), requests per second: 49, mean latency: 99.6 ms
[Wed Mar 14 2018 11:50:03 GMT+0530 (IST)] INFO Requests: 2729 (91%), requests per second: 51, mean latency: 95 ms
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO Requests: 2978 (99%), requests per second: 50, mean latency: 85.6 ms
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO 
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO Target URL:          http://localhost:4466/design-board/dev
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO Max requests:        3000
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO Concurrency level:   10
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO Agent:               none
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO Requests per second: 50
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO 
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO Completed requests:  3000
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO Total errors:        0
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO Total time:          60.398951589999996 s
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO Requests per second: 50
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO Mean latency:        99.3 ms
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO 
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO Percentage of the requests served within a certain time
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO   50%      87 ms
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO   90%      149 ms
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO   95%      180 ms
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO   99%      235 ms
[Wed Mar 14 2018 11:50:08 GMT+0530 (IST)] INFO  100%      330 ms (longest request)

Hey @raeesaa 🙂

How does the communication between the two happen?

That entirely depends on your setup 🙂 Can you share more about it?
For example, where is your GraphQL Server located, and where is your Prisma cluster located?

@marktani Both the GraphQL server and the Prisma cluster are located on my local machine.

@raeesaa Correct me if I'm wrong, but the communication happens via prisma-binding, which uses http-link-dataloader under the hood. So it is basically HTTP communication between your application and the Prisma server (where your Prisma service runs).
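
A rough sketch of that setup (graphql-yoga 1.x plus prisma-binding; paths and the endpoint are placeholders): every nested field resolved through ctx.db results in HTTP round trips from the application process to the Prisma server.

    const { GraphQLServer } = require('graphql-yoga');
    const { Prisma } = require('prisma-binding');
    const resolvers = require('./resolvers'); // hypothetical resolver map

    const server = new GraphQLServer({
      typeDefs: './src/schema.graphql', // application schema (placeholder path)
      resolvers,
      context: req => ({
        ...req,
        db: new Prisma({
          typeDefs: 'src/generated/prisma.graphql',
          endpoint: 'http://localhost:4466/example/dev', // the HTTP hop discussed above
        }),
      }),
    });

    server.start(() => console.log('GraphQL server running on http://localhost:4000'));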

I could imagine that the Docker network might be the culprit. Your application runs natively on your machine (not containerized), right?

@akoenig Thank you for the details.

Locally we run the application natively, and on the EC2 instance we have deployed the dockerized application. We are facing the lag in both cases.

Thank you for the details @raeesaa!

Would you be able to provide a minimal version of your application that exhibits this behaviour? It should include a package.json file with the exact versions of all dependencies. If we can replicate this locally, I'm confident we can find the performance bottleneck.

What version of Node are you running?

@sorenbs The version of Node.js we are using is 8.9.

I have tried to replicate our application's data model, along with seed data, in this repo: https://github.com/raeesaa/prisma-test-app.

The query that results in the performance issue is: https://gist.github.com/raeesaa/6ef958e620020db2157b2a4889cedf99

If we remove the comments field from every level, the query response is much faster.

Load test result for the query with the comments field:

loadtest --rps 15 -n 1000 --data '{"query":"{\n  level1s {\n    name\n    comments {\n      name\n      description\n    }\n    children {\n      name\n      comments {\n        name\n        description\n      }\n      children {\n        name\n        comments {\n          name\n          description\n        }\n        children {\n          name\n          comments {\n            name\n            description\n          }\n          children {\n            name\n            comments {\n              name\n              description\n            }\n            children {\n              name\n              comments {\n                name\n                description\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n}\n","variables":null,"operationName":null}' -m POST -T application/json http://localhost:4000/

[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO Target URL:          http://localhost:4000/
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO Max requests:        1000
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO Concurrency level:   1
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO Agent:               none
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO Requests per second: 15
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO 
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO Completed requests:  1000
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO Total errors:        0
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO Total time:          97.776993374 s
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO Requests per second: 10
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO Mean latency:        13189.8 ms
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO 
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO Percentage of the requests served within a certain time
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO   50%      13532 ms
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO   90%      21229 ms
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO   95%      23820 ms
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO   99%      35467 ms
[Sat Mar 17 2018 15:21:20 GMT+0530 (IST)] INFO  100%      53942 ms (longest request)

Load test result for the query without the comments field:

 loadtest --rps 15 -n 1000 --data '{"query":"{\n  level1s {\n    name\n    children {\n      name\n      children {\n        name\n        children {\n          name\n          children {\n            name\n            children {\n              name\n            }\n          }\n        }\n      }\n    }\n  }\n}\n","variables":null,"operationName":null}' -m POST -T application/json http://localhost:4000/

[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO Target URL:          http://localhost:4000/
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO Max requests:        1000
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO Concurrency level:   1
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO Agent:               none
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO Requests per second: 15
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO 
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO Completed requests:  1000
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO Total errors:        0
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO Total time:          67.01757433600001 s
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO Requests per second: 15
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO Mean latency:        70 ms
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO 
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO Percentage of the requests served within a certain time
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO   50%      62 ms
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO   90%      93 ms
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO   95%      110 ms
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO   99%      146 ms
[Sat Mar 17 2018 15:25:56 GMT+0530 (IST)] INFO  100%      204 ms (longest request)

I have run these tests on the sample application deployed on the local cluster.

Please let me know if this helps or if any other inputs are needed.

@sorenbs Any updates on this?

@raeesaa Sorry about the delay. I believe the issue is caused by the way the Prisma binding employs the data loader pattern. As such, the issue is not with Prisma itself. We will look into this soon and provide a solution.
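
For context, the data loader pattern referred to here batches the many per-child lookups that a deeply nested query triggers into fewer requests. A minimal sketch using the dataloader package (illustrative only, not prisma-binding's actual implementation; fetchCommentsForLevels is hypothetical):

    const DataLoader = require('dataloader');

    // Batch function: receives all keys requested in one tick and must return
    // one result per key, in the same order.
    const commentsLoader = new DataLoader(async levelIds => {
      const comments = await fetchCommentsForLevels(levelIds); // one round trip instead of N
      return levelIds.map(id => comments.filter(c => c.levelId === id));
    });

    // In a field resolver, individual load() calls made during the same tick
    // are coalesced into a single batched request:
    //   comments: parent => commentsLoader.load(parent.id)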

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
