Describe the bug
When running a "deleteManyX" mutation against a table with a large number of records, an error is returned with the following message:
Whoops. Looks like an internal server error. Search your server logs for request ID: local:api:cjkcx8x7rvcxy0871suxurkfr
Looking at the prisma logs, I found the following:
[Bugsnag - local / testing] Error report: com.bugsnag.Report@54935f02
org.postgresql.util.PSQLException: An I/O error occurred while sending to the backend.
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:333)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:155)
at org.postgresql.jdbc.PgPreparedStatement.execute(PgPreparedStatement.java:144)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.execute(ProxyPreparedStatement.java:44)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.execute(HikariProxyPreparedStatement.java)
at com.prisma.api.connector.jdbc.database.BuilderBase.$anonfun$deleteToDBIO$1(BuilderBase.scala:70)
at com.prisma.api.connector.jdbc.database.BuilderBase.$anonfun$deleteToDBIO$1$adapted(BuilderBase.scala:70)
at com.prisma.api.connector.jdbc.database.BuilderBase.$anonfun$jooqToDBIO$1(BuilderBase.scala:87)
at slick.jdbc.SimpleJdbcAction.run(StreamingInvokerAction.scala:70)
at slick.jdbc.SimpleJdbcAction.run(StreamingInvokerAction.scala:69)
at slick.dbio.DBIOAction$$anon$4.$anonfun$run$3(DBIOAction.scala:239)
at scala.collection.Iterator.foreach(Iterator.scala:929)
at scala.collection.Iterator.foreach$(Iterator.scala:929)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
at scala.collection.IterableLike.foreach(IterableLike.scala:71)
at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at slick.dbio.DBIOAction$$anon$4.run(DBIOAction.scala:239)
at slick.dbio.DBIOAction$$anon$4.run(DBIOAction.scala:237)
at slick.basic.BasicBackend$DatabaseDef$$anon$2.liftedTree1$1(BasicBackend.scala:275)
at slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:275)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Tried to send an out-of-range integer as a 2-byte value: 44446
at org.postgresql.core.PGStream.sendInteger2(PGStream.java:224)
at org.postgresql.core.v3.QueryExecutorImpl.sendParse(QueryExecutorImpl.java:1440)
at org.postgresql.core.v3.QueryExecutorImpl.sendOneQuery(QueryExecutorImpl.java:1762)
at org.postgresql.core.v3.QueryExecutorImpl.sendQuery(QueryExecutorImpl.java:1326)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:298)
... 25 more
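From what I can tell, the "out-of-range integer as a 2-byte value" part comes from the PostgreSQL extended query protocol encoding the number of bind parameters of a prepared statement as a signed 16-bit integer, so a single statement can carry at most 32767 parameters; 44446 is presumably how many parameters the generated DELETE tried to bind. A minimal JDBC sketch along these lines (connection URL and table name are placeholders, not Prisma's actual schema) reproduces the same exception outside of Prisma:

import java.sql.DriverManager

object BindLimitRepro {
  def main(args: Array[String]): Unit = {
    // Placeholder connection details; adjust for your local Postgres.
    val conn = DriverManager.getConnection(
      "jdbc:postgresql://localhost:5432/prisma", "prisma", "prisma")
    try {
      // 44446 placeholders, i.e. more than the 32767 the protocol can encode.
      val paramCount   = 44446
      val placeholders = List.fill(paramCount)("?").mkString(", ")
      val stmt = conn.prepareStatement(
        s"""DELETE FROM "User" WHERE "id" IN ($placeholders)""")
      for (i <- 1 to paramCount) stmt.setString(i, s"id-$i")
      // Fails with: Tried to send an out-of-range integer as a 2-byte value: 44446
      stmt.execute()
    } finally conn.close()
  }
}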
To Reproduce
mutation {
  deleteManyUsers {
    count
  }
}
Expected behavior
I expect this mutation to delete all the records without producing an error.
Versions (please complete the following information):
prisma CLI: prisma/1.13.4 (darwin-x64) node-v10.8.0

I can confirm this bug using the Postgres connector.
Additional reproduction setup:
type User {
  id: ID! @unique
  name: String!
}
prisma import -d nodes.zip
I just confirmed that this is specific to the Postgres connector. I cannot reproduce this using the MySQL connector.
This might be related to the maximum number of bind parameters the respective databases accept in a single query.
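If that is the cause, one application-side workaround is to chunk the ids into batches that stay below the limit and issue several DELETE statements instead of one. A rough sketch of the idea, assuming plain JDBC and a "User" table (this is not how Prisma generates its queries internally, just an illustration of staying below the limit):

import java.sql.Connection

object ChunkedDelete {
  // PostgreSQL's extended protocol caps bind parameters per statement at 32767.
  val MaxParams = 32767

  def deleteByIds(conn: Connection, ids: Seq[String]): Int =
    ids.grouped(MaxParams).map { batch =>
      val placeholders = List.fill(batch.size)("?").mkString(", ")
      val stmt = conn.prepareStatement(
        s"""DELETE FROM "User" WHERE "id" IN ($placeholders)""")
      try {
        batch.zipWithIndex.foreach { case (id, i) => stmt.setString(i + 1, id) }
        stmt.executeUpdate()
      } finally stmt.close()
    }.sum
}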
To be clear, this worked on the previous version that my team was using (1.11.1). If there is anything I can do to help track this down further, please let me know.
Thanks for the offer, but I'm pretty sure it's related to this issue: https://github.com/jOOQ/jOOQ/issues/5701. We started using jOOQ in 1.12. I just need to find out how to fix it, since from the issue it sounds like it should already be handled by jOOQ.
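For reference, jOOQ can also be told to render all bind values inline via StatementType.STATIC_STATEMENT, which sidesteps the protocol's 32767-parameter limit entirely; whether that is the right approach here, or whether the IN list should be chunked instead, is exactly what I need to figure out. A minimal sketch with made-up table and column names, purely to illustrate the setting:

import java.sql.Connection
import org.jooq.SQLDialect
import org.jooq.conf.{Settings, StatementType}
import org.jooq.impl.DSL
import org.jooq.impl.DSL.{field, table}
import scala.collection.JavaConverters._

object InlinedDelete {
  def deleteByIds(conn: Connection, ids: Seq[String]): Int = {
    // STATIC_STATEMENT makes jOOQ inline bind values into the SQL string,
    // so no Parse message with a 2-byte parameter count is ever sent.
    val settings = new Settings().withStatementType(StatementType.STATIC_STATEMENT)
    val ctx      = DSL.using(conn, SQLDialect.POSTGRES, settings)
    ctx.deleteFrom(table("\"User\""))
      .where(field("\"id\"").in(ids.asJava))
      .execute()
  }
}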
Gotcha, thanks for the information! The support from everyone on the prisma team on bug reports like this has really impressed me.
We just released a fix for this in 1.13.6
@do4gr Thanks for your quick action on this. You guys have really impressed me and my team with your response to bug reports.