What method of pagination are you currently using?

The Problem

Pagination is a basic requirement for any API that returns data as a list. It reduces the amount of data that has to be queried and transmitted, since fetching an entire long list at once is wasteful for most common features.

In this article, I will present two popular and easily implemented pagination techniques. Each method has its pros and cons, and I will discuss when to use them in different scenarios.

Pagination Using LIMIT & OFFSET

/articles?limit=10&offset=0

The above URL is probably familiar to many people: it is pagination using limit and offset. The working principle is quite simple: limit is the maximum number of records to return, and offset is the number of rows to skip before the database starts returning data.

The example above will retrieve 10 rows starting from the first row.

In the UI, this pagination technique usually shows up as a page selector like the one in the image below.

Pagination Using limit offset

Pages like 1, 2, 3... up to 30 are displayed, allowing users to easily navigate to the desired page to view its content.
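
The page numbers map directly onto the two parameters: with a page size of limit, page n corresponds to offset = (n - 1) * limit. A tiny helper (the 1-based page convention here is an assumption, not something the URL above dictates) might look like this:

// Convert a 1-based page number into a limit/offset pair (assumed convention).
function pageToOffset(page: number, limit = 10): { limit: number; offset: number } {
  return { limit, offset: (page - 1) * limit };
}

// pageToOffset(3) -> { limit: 10, offset: 20 }, i.e. /articles?limit=10&offset=20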

On the server side, limit/offset retrieval is usually handled by the database itself. Nowadays, almost every database supports LIMIT and OFFSET in its query syntax. For example, in PostgreSQL:

SELECT * FROM articles ORDER BY id LIMIT 10 OFFSET 0;

Whatever limit and offset the client sends, you substitute them into this query (ideally as bind parameters) to get the requested page, roughly as in the sketch below.
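
Here is a minimal sketch of such an endpoint, assuming an Express server and the node-postgres (pg) driver; neither library nor the connection details come from the original example:

import express from "express";
import { Pool } from "pg";

const app = express();
const pool = new Pool(); // connection settings assumed to come from environment variables

app.get("/articles", async (req, res) => {
  // Clamp the user-supplied values so a request cannot ask for an
  // unbounded number of rows or a negative offset.
  const limit = Math.min(Number(req.query.limit) || 10, 100);
  const offset = Math.max(Number(req.query.offset) || 0, 0);

  // Bind parameters instead of string concatenation to avoid SQL injection.
  const { rows } = await pool.query(
    "SELECT * FROM articles ORDER BY id LIMIT $1 OFFSET $2",
    [limit, offset]
  );

  res.json({ articles: rows });
});

app.listen(3000);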

While a user is paging through the list, any record inserted or deleted in the meantime can cause duplicates or gaps on the next page. An inserted record pushes the following rows down, so the last row of the current page shows up again on the next one; a deleted record pulls the rows up, so one row is silently skipped.

Another drawback is that querying with a large offset is slow on big tables. This comes down to how databases handle OFFSET: most of them have to walk through and discard rows until the requested offset is reached before they start returning data. For example, with an offset of 1,000,000, the database traverses one million rows before it begins returning data from row 1,000,001.

In summary, limit and offset are suitable when you want to implement pagination quickly with a numbered page list that lets users jump straight to the page they want. However, think carefully before applying this technique to a large dataset, as it can hurt performance.

Pagination Using Cursor

Have you ever encountered this type of pagination? It only consists of two buttons: Next and Previous, as shown below.

Pagination Using Cursor

If so, you were most likely looking at cursor-based pagination. What makes this method different is that there is no list of page numbers like in the limit and offset technique above.

A cursor-based pagination URL may look like this:

/articles?cursor=4n5pxq24kp

The working principle of the cursor is quite simple. The first request for the list returns a cursor along with the data, and the client sends that cursor back to fetch the next page. This repeats until no more data (or no further cursor) is returned, which means the list has been exhausted.

{
    "articles": [...],
    "next_cursor": "4n5pxq24kn",
    "prev_cursor": "4n5pxq24kp"
}

It is evident that with this technique you cannot jump to a specific page, because a cursor is only handed out after each call. Typically the cursor is encoded according to rules known only to the server, and when the client passes it back, the server decodes it to recover the information it needs.
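
The encoding itself is up to the server; one common scheme, assumed here purely for illustration, is to wrap the id of the last row on the page in base64url using Node's Buffer:

// One possible cursor format: the id of the last row on the page,
// wrapped in base64url so the token is opaque and URL-safe.
function encodeCursor(lastId: number): string {
  return Buffer.from(String(lastId)).toString("base64url");
}

function decodeCursor(cursor: string): number {
  const id = Number(Buffer.from(cursor, "base64url").toString("utf8"));
  if (!Number.isInteger(id) || id < 0) {
    throw new Error("invalid cursor");
  }
  return id;
}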

On the server side, the query no longer relies on OFFSET; instead, LIMIT is combined with a comparison (such as > or >=) on an indexed column.

For example, assume that the cursor 4n5pxq24kp was encoded from id = 10 by the server. When retrieving the next page, the query would be similar to:

SELECT * FROM articles WHERE id > 10 ORDER BY id LIMIT 10;

As you can see, if the id column of the articles table is indexed, the query does not have to walk through and discard the first 10 records; the index lets the database seek straight to the rows with id greater than 10. The cost of finding the starting point is an index lookup, roughly O(log n) and effectively constant in practice, and crucially it does not grow with how deep into the list you are.
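
Tying the pieces together, a cursor-based handler might look roughly like the sketch below, reusing the assumed Express/node-postgres setup and the encode/decode helpers above. Only next_cursor is produced here; a prev_cursor would be built the same way from the first row, with the comparison reversed:

app.get("/articles", async (req, res) => {
  const pageSize = 10;
  // No cursor on the first request means "start from the beginning".
  const afterId = req.query.cursor ? decodeCursor(String(req.query.cursor)) : 0;

  // The indexed comparison replaces OFFSET: the database seeks to id > afterId
  // instead of scanning and discarding all the skipped rows.
  const { rows } = await pool.query(
    "SELECT * FROM articles WHERE id > $1 ORDER BY id LIMIT $2",
    [afterId, pageSize]
  );

  // Only hand out a next_cursor when the page is full, i.e. there may be more data.
  const nextCursor =
    rows.length === pageSize ? encodeCursor(rows[rows.length - 1].id) : null;

  res.json({ articles: rows, next_cursor: nextCursor });
});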

This method also copes well with data being added or deleted during pagination, because it does not rely on an offset to find the next record; it relies on the comparison against the last id the client has seen. For example, if the last record on the current page is deleted, the next page still returns a full set of 10 records starting right after that id, whereas with offset a row would be skipped.

It is clear that this approach provides higher performance compared to limit and offset, but it may be more complex and time-consuming to implement. Users cannot navigate directly to a desired page, and you must have a sortable field with an index to make it work.

Conclusion

Above, I presented two popular pagination techniques. limit and offset are simple and easy to implement, but they run into performance issues with large datasets. The cursor method performs well but is more complex to implement and comes with some usability limitations. Each method has its pros and cons, and there is no need to pick cursor just because of its better performance; instead, choose the method that fits the problem you are trying to solve.
