GraphQL Pagination

El Housseine Jaafari on February 18, 2020

Hey there! Glad you are here and welcome to the GraphQL Pagination tutorial. The goal of this tutorial is to guide you through setting up your paginati...
 
Ari Seyhun • Edited

Thanks for this article, El Housseine Jaafari!

I am implementing this for myself, and using the ID as the cursor works just fine when sorting by time, but in my case I also want to sort by "rating" (a column in my database).

Would it be appropriate to make the cursor be in the shape of {rating}:{created_at} encoded as base64?
For example: 30:1613828846856 for a rating of 30 with the timestamp as 1613828846856 (which would be MzA6MTYxMzgyODg0Njg1Ng== encoded as base64).

I'm not sure if this is hacky, but it's the only kind of solution I can come up with for an SQL query that ends with ORDER BY rating, created_at.
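
A minimal sketch of how such a cursor could be encoded and turned into a keyset condition, assuming a Node.js server, a posts table, and PostgreSQL-style placeholders (all names here are illustrative, not from the article):

```typescript
import { Buffer } from "node:buffer";

// Composite cursor of the shape "{rating}:{created_at}", base64-encoded.
function encodeCursor(rating: number, createdAt: number): string {
  return Buffer.from(`${rating}:${createdAt}`).toString("base64");
}

function decodeCursor(cursor: string): { rating: number; createdAt: number } {
  const [rating, createdAt] = Buffer.from(cursor, "base64")
    .toString("utf8")
    .split(":");
  return { rating: Number(rating), createdAt: Number(createdAt) };
}

// For `ORDER BY rating, created_at`, the next page is everything strictly
// "after" the cursor in that same ordering, which a row-value comparison
// expresses in a single condition:
const nextPageSql = `
  SELECT id, rating, created_at
    FROM posts
   WHERE (rating, created_at) > ($1, $2)
   ORDER BY rating, created_at
   LIMIT $3;
`;
```

If two rows can share both the rating and the timestamp, the id is usually appended as a final tie-breaker, which is where the rest of this thread ends up.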

El Housseine Jaafari

Your case is not easy with a relational database. One solution is to create a view from your query, add an attribute that is a combination of the rating & the ID, and then paginate against that view; this way the pagination will work like a charm for you.
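
For illustration only, such a view might look roughly like this in PostgreSQL (the names, the padding width, and the assumption of non-negative ratings are mine, not from the comment):

```typescript
// Build one sortable attribute out of rating + id so the id-as-cursor logic
// from the article keeps working on the view unchanged. Zero-padding the
// rating keeps lexicographic order aligned with numeric order.
const createViewSql = `
  CREATE VIEW posts_with_cursor AS
  SELECT p.*,
         lpad(p.rating::text, 10, '0') || ':' || p.id::text AS cursor_key
    FROM posts p;
`;

// Pagination then runs against the view on the combined key alone:
const pageSql = `
  SELECT *
    FROM posts_with_cursor
   WHERE cursor_key > $1   -- cursor_key of the last row on the previous page
   ORDER BY cursor_key
   LIMIT $2;
`;
```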

Chihab Otmani

Great article, El Housseine!

How would the resolver be implemented if the IDs are not sequential (ordered)?

El Housseine Jaafari

Thanks for your comments and feedback :)

Your question is a good one. In the article I mention, in the Cons part, that the cursor pagination style cannot be achieved if you don't have a unique & sequential key.

In most cases we have createdAt, which can be used as the cursor key when your primary key is a UUIDv4 or similar.
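
As an illustration only (not code from the article), a resolver along those lines could look roughly like this, assuming the node-postgres driver and a posts table with a created_at column:

```typescript
import { Buffer } from "node:buffer";
import { Pool } from "pg"; // assumed driver; swap in whatever client you use

const pool = new Pool();

// With UUIDv4 primary keys there is no usable ordering on the id itself,
// so created_at takes over as the cursor value.
async function postsResolver(_: unknown, args: { first: number; after?: string }) {
  const after = args.after
    ? new Date(Buffer.from(args.after, "base64").toString("utf8"))
    : new Date(); // no cursor yet: start from "now" and walk backwards

  const { rows } = await pool.query(
    `SELECT id, title, created_at
       FROM posts
      WHERE created_at < $1
      ORDER BY created_at DESC
      LIMIT $2`,
    [after, args.first]
  );

  return rows.map((row) => ({
    node: row,
    cursor: Buffer.from(row.created_at.toISOString()).toString("base64"),
  }));
}
```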

Chihab Otmani • Edited

I see, makes sense!

The issue is that createdAt might not be unique, so we could have items from one page re-displayed on the next page.

We could probably use something like a DB row_index function (the index of each item in a query result) or, if not supported by the DB engine, still use the id and get the next X values after the row that has that ID (probably not a simple query, though).
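
For what it's worth, one common way to express that query is to keep createdAt as the primary sort and use the id purely as a tie-breaker inside the cursor; a hypothetical PostgreSQL-style sketch (table and column names assumed):

```typescript
// The cursor carries (created_at, id) of the last row already delivered.
// The row-value comparison skips any remaining rows that share the same
// timestamp, so nothing is re-displayed and nothing is lost.
const nextPageWithTieBreakerSql = `
  SELECT id, title, created_at
    FROM posts
   WHERE (created_at, id) < ($1, $2)
   ORDER BY created_at DESC, id DESC
   LIMIT $3;
`;
```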

El Housseine Jaafari

That would complicate things a lot. You can go for a timestamp value in your createdAt, for example; I don't believe there will be more than one row inserted in the same microsecond, right? 🤔

Chihab Otmani • Edited

Regarding timestamps, that would be a good solution indeed:

  • if the application does not have a lot of traffic that might introduce timestamp collisions (which is the case for my crappy apps ;))
  • if you already have a timestamp in your database table/collection or have the ability to add one (think about read-only legacy databases)
  • to avoid collisions on timestamps and be able to query the next elements without duplicates, timestamps would need to be created in sequence with a random suffix based on time; I'm not sure this is always provided OOB by db engines.

Actually, if supported, a row index is pretty straightforward: you simply add it to your SELECT and you get an index for each row in the result, which would be the cursor value, I guess (roughly as sketched below).
Example here for postgresql: postgresqltutorial.com/postgresql-...
Do you see any issue with using ROW_NUMBER()?

PS: Just rambling on here; I'll probably use timestamps in my apps, as you've suggested ;)
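
For reference, the ROW_NUMBER() variant discussed above might look roughly like this (hypothetical table and column names, PostgreSQL syntax):

```typescript
// ROW_NUMBER() numbers every row of the fully ordered result, and that
// position becomes the cursor. As the reply below points out, the window
// is computed over the whole set, so it behaves much like LIMIT/OFFSET.
const rowNumberPageSql = `
  WITH numbered AS (
    SELECT p.*,
           ROW_NUMBER() OVER (ORDER BY rating DESC, created_at DESC) AS row_idx
      FROM posts p
  )
  SELECT *
    FROM numbered
   WHERE row_idx > $1   -- row_idx of the last row on the previous page
   ORDER BY row_idx
   LIMIT $2;
`;
```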

El Housseine Jaafari

The only issue I see with using ROW_NUMBER is that you always have to select all results from the table, assign a row number, and only then pull the results you want. This is exactly how limit & offset works. So it isn't going to be of any help if your database is growing fast.

As I said, if you include microseconds in your timestamp value, there will mostly be no collisions.
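
As a side note, PostgreSQL timestamps already store microseconds; a hypothetical migration that makes this explicit could look like the following (table and column names assumed):

```typescript
// timestamptz(6) keeps full microsecond precision, and clock_timestamp()
// returns the actual wall-clock time at insert rather than the start of
// the transaction, which further reduces the chance of identical values.
const migrationSql = `
  ALTER TABLE posts
    ALTER COLUMN created_at TYPE timestamptz(6),
    ALTER COLUMN created_at SET DEFAULT clock_timestamp();
`;
```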

Chihab Otmani • Edited

You're right! You'd just have to have these micro-timestamps in your database and have end users tolerant to potential (even rare) collisions; otherwise, I guess you'd have to deal with less performant fetching strategies.

Mouad ZIANI

Thanks for sharing, jeffery

El Housseine Jaafari

Sure thing, mate :)