When paging through data that comes from a DB, you need to know how many pages there will be to render the page jump controls.
Currently I do that by running the query twice: once wrapped in a count() to get the total number of results, and a second time with a LIMIT applied to get back just the rows I need for the current page. Is there a more efficient way to get that count?
You could mitigate the performance penalty by not running the COUNT() query every time. Cache the number of pages for, say, 5 minutes before the query is run again. Unless you're seeing a huge number of INSERTs, that should work just fine.
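A rough sketch of one way to do that inside the database (the page_count_cache table, the 'bar_listing' key, and the bar table are made up for illustration; many applications would cache the count in the application layer instead):

CREATE TABLE page_count_cache (
    query_key   text PRIMARY KEY,
    full_count  bigint NOT NULL,
    cached_at   timestamptz NOT NULL DEFAULT now()
);

-- Reuse the cached count while it is younger than 5 minutes:
SELECT full_count
FROM   page_count_cache
WHERE  query_key = 'bar_listing'
AND    cached_at > now() - interval '5 minutes';

-- If that returns nothing, refresh it (ON CONFLICT needs Postgres 9.5+):
INSERT INTO page_count_cache (query_key, full_count)
SELECT 'bar_listing', count(*)
FROM   bar
ON CONFLICT (query_key) DO UPDATE
SET    full_count = EXCLUDED.full_count,
       cached_at  = now();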
Since Postgres already does a certain amount of caching, this type of method isn't as inefficient as it seems. It's definitely not doubling execution time; we have timers built into our DB layer, so I have seen the evidence.
Seeing as you need the count for the purpose of paging, I'd suggest running the full query once, writing the data to disk as a server-side cache, then feeding that through your paging mechanism.
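One way to sketch something similar inside the database rather than on the application side (an unlogged table standing in for the server-side cache; all names and placeholders are illustrative):

CREATE UNLOGGED TABLE bar_cache AS   -- regenerate periodically or per session
SELECT *
FROM   bar
WHERE  <some condition>;

-- Each page is then served from the cached copy:
SELECT *
FROM   bar_cache
ORDER  BY <some col>
LIMIT  <pagesize>
OFFSET <offset>;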
If you're running the COUNT query for the purpose of deciding whether to provide the data to the user or not (i.e. if there are > X records, give back an error), you need to stick with the COUNT approach.
Things have changed since 2008: you can use a window function to get the full count and the limited result in one query. Window functions were introduced with PostgreSQL 8.4 in 2009.
SELECT foo
, count(*) OVER() AS full_count
FROM bar
WHERE <some condition>
ORDER BY <some col>
LIMIT <pagesize>
OFFSET <offset>;
Note that this can be considerably more expensive than without the total count. All rows have to be counted, and a possible shortcut taking just the top rows from a matching index may not be helpful any more.
Doesn't matter much with small tables or full_count <= OFFSET + LIMIT. Matters for a substantially bigger full_count.

Corner case: when OFFSET is at least as great as the number of rows from the base query, no row is returned. So you also get no full_count. Possible alternative:
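One possible shape for that alternative (just a sketch, reusing the table and placeholders from above): run the base query once in a CTE, take the count from it, and LEFT JOIN the limited page to that single count row, so the count comes back even when the page is empty:

WITH q AS (                       -- base query, evaluated once
   SELECT *
   FROM   bar
   WHERE  <some condition>
   )
SELECT *
FROM  (SELECT count(*) AS full_count FROM q) c   -- always exactly one row
LEFT   JOIN (
   SELECT *
   FROM   q
   ORDER  BY <some col>
   LIMIT  <pagesize>
   OFFSET <offset>
   ) page ON true;                -- page rows, or NULLs when the page is empty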
Sequence of events in a SELECT query:

( 0. CTEs are evaluated and materialized separately. In Postgres 12 or later the planner may inline those like subqueries before going to work.) Not here.

1. The WHERE clause (and JOIN conditions, though there are none in your example) filters qualifying rows from the base table(s). The rest is based on the filtered subset.

( 2. GROUP BY and aggregate functions would go here.) Not here.

( 3. Other SELECT list expressions are evaluated, based on grouped / aggregated columns.) Not here.

4. Window functions are applied depending on the OVER clause and the frame specification of the function. The simple count(*) OVER() is based on all qualifying rows.

5. ORDER BY sorts the remaining rows.

( 6. DISTINCT or DISTINCT ON would go here.) Not here.

7. LIMIT / OFFSET are applied based on the established order to select the rows to return.

LIMIT / OFFSET becomes increasingly inefficient with a growing number of rows in the table. Consider alternative approaches if you need better performance; the keyset-style rewrite described in the last answer below is one of them.
There are completely different approaches to get the count of affected rows (not the full count before OFFSET & LIMIT were applied). Postgres has internal bookkeeping of how many rows were affected by the last SQL command. Some clients can access that information or count rows themselves (like psql).
For instance, you can retrieve the number of affected rows in plpgsql immediately after executing an SQL command with:
GET DIAGNOSTICS integer_var = ROW_COUNT;
Details in the manual.
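A minimal sketch in a DO block (the table and condition are placeholders, as in the examples above; the particular query doesn't matter):

DO
$$
DECLARE
   _rows integer;
BEGIN
   PERFORM 1 FROM bar WHERE <some condition>;   -- run the query, discard the rows
   GET DIAGNOSTICS _rows = ROW_COUNT;           -- rows processed by the last command
   RAISE NOTICE 'Matching rows: %', _rows;
END
$$;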
Or you can use pg_num_rows in PHP. Or similar functions in other clients.
As I describe on my blog, MySQL has a feature called SQL_CALC_FOUND_ROWS. This removes the need to do the query twice, but it still needs to do the query in its entirety, even if the LIMIT clause would have allowed it to stop early.
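For reference, the usage looks roughly like this (MySQL syntax, not PostgreSQL; the table and placeholders are borrowed from the examples above):

SELECT SQL_CALC_FOUND_ROWS foo
FROM   bar
WHERE  <some condition>
ORDER  BY <some col>
LIMIT  10;

SELECT FOUND_ROWS();   -- total matching rows, ignoring the LIMIT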
As far as I know, there is no similar feature for PostgreSQL.

One thing to watch out for when doing pagination (the most common thing for which LIMIT is used, IMHO): doing an "OFFSET 1000 LIMIT 10" means that the DB has to fetch at least 1010 rows, even if it only gives you 10. A more performant way is to remember the value of the column you are ordering by for the last row of the previous page (the 1000th in this case) and rewrite the query as "... WHERE order_row > value_of_1000th_row LIMIT 10". The advantage is that "order_row" is most probably indexed (if not, you've got a problem). The disadvantage is that if new elements are added between page views, this can get a little out of sync (but then again, it may not be observable by visitors and can be a big performance gain).
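A sketch of that rewrite (assuming order_row is the indexed sort column, you remembered its value from the last row of the previous page, and the table name is illustrative):

SELECT *
FROM   bar
WHERE  order_row > <value_of_1000th_row>   -- last value seen on the previous page
ORDER  BY order_row
LIMIT  10;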