Prisma1: Remove pagination limits

Created on 18 Jun 2017 · 6 Comments · Source: prisma/prisma1

The limit of 1000 nodes per pagination query can be quite a hindrance when running scripts or migrations.

All 6 comments

"when running scripts or migrations."

Not only then. Sometimes you want the client to grab the whole table when it's a functional, lean one, for example, tag key/id mappings, where search, auto-complete, and NLP are done purely on the client side.

Pagination can be very limiting for backend processes, especially when combined with:

  • The lack of aggregation
  • The execution time limit of AWS Lambda

The lack of aggregation forces us to fetch all the data to perform simple groupBy/count aggregations.
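
For example, a groupBy/count that a database could do server-side becomes a client-side loop over every fetched node. A minimal sketch, with hypothetical `allNodes` and `category` names:

const counts = {}
for (const node of allNodes) {
  // No server-side groupBy/count, so tally per category after fetching everything.
  counts[node.category] = (counts[node.category] || 0) + 1
}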

In the second case, the problem is that running separate queries, or even multiple queries in a single request, can take longer than Lambda's execution time limit allows.

We might get around this by structuring our data differently, though, or by moving away from AWS Lambda.

Right now, I just hack my way around it, which is ugly and a lose-lose for both of us.
For example, to grab a lean list of 3200 tags:

query allTags {
  a1: allTags(first:1000) {
    ...TagFragment
  }
  a2: allTags(first:1000, skip:1000) {
    ...TagFragment
  }
  a3: allTags(first:1000, skip:2000) {
    ...TagFragment
  }
  a4: allTags(first:1000, skip:3000) {
    ...TagFragment
  }
  _allTagsMeta {
    count
  }
}

Then I merge them all client-side once the response comes back. Silly, right?
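
The merge itself is trivial. A minimal sketch, assuming `response` holds the result of the query above:

// Concatenate the aliased pages back into one flat array.
const { a1, a2, a3, a4, _allTagsMeta } = response
const allTags = [...a1, ...a2, ...a3, ...a4]
// Sanity check against the server-reported count.
console.assert(allTags.length === _allTagsMeta.count)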

The only alternative I found is to bundle all tags into the build as static JSON and only fetch the diff (_updated since build timestamp_). I opted against it, as it just feels like another hack, doesn't save many network bytes, and adds another (slow) step to the build.
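
For what it's worth, the diff fetch would be a filtered query. A sketch using graphql-request's `request`, assuming Graphcool's `updatedAt_gt` filter syntax, a hypothetical `BUILD_TIMESTAMP` baked in at build time, and concrete fields in place of the fragment:

const tagsSinceBuildQuery = `
  query tagsSinceBuild($since: DateTime!) {
    allTags(filter: { updatedAt_gt: $since }) {
      id
      name
    }
  }
`
// Fetch only tags touched after the static bundle was built.
const diff = await request(endpoint, tagsSinceBuildQuery, { since: BUILD_TIMESTAMP })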

I did something similar to @oori. Probably not the best way to go about it, but it gave me the results I was looking for.

import { request } from 'graphql-request'

// Placeholder: your project's API endpoint.
const endpoint = 'https://api.graph.cool/simple/v1/PROJECT_ID'

const allClubsQuery = `
  query allClubs($skipNum: Int!) {
    allClubs(first: 1000, skip: $skipNum) {
      id
      clubRef
    }
  }
`

let clubIdMap = []
for (let i = 0; i < 15; i++) {
  // Each request fetches the next page of up to 1000 nodes.
  const skipNum = 1000 * i
  const clubData = await request(endpoint, allClubsQuery, { skipNum })
  clubIdMap = clubIdMap.concat(clubData.allClubs)
}
// No trailing Promise.all is needed: every request is already awaited,
// so clubIdMap holds plain objects by this point.
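
A slightly more general variant (a sketch, reusing the same query and endpoint) pages until a response comes back short, instead of hard-coding 15 iterations:

// Page until a response returns fewer than 1000 nodes.
let allClubs = []
for (let skipNum = 0; ; skipNum += 1000) {
  const page = (await request(endpoint, allClubsQuery, { skipNum })).allClubs
  allClubs = allClubs.concat(page)
  if (page.length < 1000) break
}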

This is being discussed further in #748.

The documentation states: "a maximum of 1000 nodes can be returned per pagination field."

However, it should also state that this 1000-node limitation applies to ALL queries.

I assumed that I could grab all nodes as long as I was not specifying pagination.
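
In other words, even a query with no pagination arguments is capped at 1000 nodes, something like:

// Still returns at most 1000 nodes, despite omitting `first`/`skip`.
const unpaginatedQuery = `
  query {
    allTags {
      id
    }
  }
`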
