I have a directory that I've built with the PHP script below, and it uses pagination to get 1002 results per page. The problem is that the farther you get into the pages, the slower the queries become.
LIMIT with an offset is extremely slow in most databases (I've found some documentation to this effect for MySQL, and I'm trying to find a really good article I read a while ago explaining this for SQLite). The reason is that it's generally implemented something like this: run the query as if the LIMIT clause wasn't there, then throw away the rows before the offset. What this means is that if you do LIMIT 10000, 10, it will be interpreted as: compute the result set, discard the first 10,000 rows, and return the next 10.
There's a trivial optimization where you can at least use the index for the first 10,000 results since you don't care about their values, but even in that case, the database still needs to walk through 10,000 index values before giving you your 10 results. There may be further optimizations that can improve on this, but in the general case you don't want to use LIMIT with an offset for large values.
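To make the cost concrete, here's a minimal sketch of the slow pattern using Python's sqlite3 module (the table and column names are made up for illustration; they aren't from the original script):

```python
import sqlite3

# Toy table standing in for the directory listing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO entries (name) VALUES (?)",
                 [(f"entry-{i}",) for i in range(1, 20001)])

# Offset pagination: to serve page 1000, SQLite still has to step over
# page_size * page rows internally and discard them before returning 10.
page_size, page = 10, 1000
rows = conn.execute(
    "SELECT id, name FROM entries ORDER BY id LIMIT ? OFFSET ?",
    (page_size, page_size * page),
).fetchall()
print(rows[0])  # first row of that page: (10001, 'entry-10001')
```

The query returns the right rows, but the work done grows linearly with the offset, which is exactly the sluggishness described above.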
The most efficient way to handle pagination that I'm aware of is to keep track of the last id, so if page one ends on id = 5, then make your next link use WHERE id > 5 (with a LIMIT x, of course).
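A minimal sketch of that keyset approach with Python's sqlite3 module (the table, column, and function names are invented for illustration):

```python
import sqlite3

# Toy table standing in for the directory.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO entries (name) VALUES (?)",
                 [(f"entry-{i}",) for i in range(1, 101)])

def next_page(last_id, page_size=5):
    """Keyset pagination: seek directly past the last id already shown.

    The primary-key index satisfies 'id > ?' with a single seek, so the
    cost stays constant no matter how deep into the results you are.
    """
    return conn.execute(
        "SELECT id, name FROM entries WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size),
    ).fetchall()

page1 = next_page(0)             # ids 1..5
page2 = next_page(page1[-1][0])  # ids 6..10
```

The "next" link just carries the last id of the current page; the trade-off is that you can only step forward/backward from a known row, not jump to an arbitrary page number.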
EDIT: Found the article for SQLite. I highly recommend you read this since it explains The Right Way™ to do things in SQL. Since the SQLite people are really smart and other databases have this same problem, I assume MySQL implements this in a similar way.
Another error that crops up frequently is programmers trying to implement a scrolling window using LIMIT and OFFSET. The idea here is that you first just remember the index of the top entry in the display and run a query like this:
SELECT title FROM tracks WHERE singer='Madonna' ORDER BY title LIMIT 5 OFFSET :index
The index is initialized to 0. To scroll down just increment the index by 5 and rerun the query. To scroll up, decrement the index by 5 and rerun.
The above will work actually. The problem is that it gets slow when the index gets large. The way OFFSET works in SQLite is that it causes the sqlite3_step() function to ignore the first :index rows that it sees. So, for example, if :index is 1000, you are really reading in 1005 entries and ignoring all but the last 5. The net effect is that scrolling starts to become sluggish as you get lower and lower in the list.
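Applying the keyset idea to that same query: instead of OFFSET, remember the last title displayed and seek past it. A sketch in Python's sqlite3 module (the tracks data here is made up so the example runs standalone):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracks (title TEXT, singer TEXT)")
conn.executemany("INSERT INTO tracks VALUES (?, 'Madonna')",
                 [(f"Track {i:03d}",) for i in range(1, 21)])

# Scroll down by seeking past the last title shown, rather than making
# sqlite3_step() count off and discard :index rows with OFFSET.
# (Assumes titles are unique; ties would need a secondary sort key.)
last_title = ""  # start of the list
window = conn.execute(
    "SELECT title FROM tracks WHERE singer='Madonna' AND title > ? "
    "ORDER BY title LIMIT 5",
    (last_title,),
).fetchall()
last_title = window[-1][0]  # remember for the next scroll-down
```

Each scroll re-runs the same cheap indexed query with a new boundary value, so the window stays fast no matter how far down the list you are.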