EXPLAIN EXTENDED

How to create fast database queries

MySQL ORDER BY / LIMIT performance: late row lookups


From Stack Overflow:

When I run an SQL command like the one below, it takes more than 15 seconds:

SELECT  *
FROM    news
WHERE   cat_id = 4
ORDER BY
        id DESC
LIMIT   150000, 10

EXPLAIN shows that it's using where and the index on (cat_id, id).

LIMIT 20, 10 on the same query only takes several milliseconds.

This task can be reformulated like this: take the last 150,010 rows in id order and return the first 10 of them.

This means that though we only need 10 records, we still need to count off the first 150,000.

The table has an index which keeps the records ordered. This allows us not to use a filesort. However, the query is still far from being efficient: 15 seconds for 150,000 records (which are already ordered) is way too much.

To better understand the reason behind the low performance, let's look at this picture:

[Figure: the index and the table as separate physical structures]

As we already said, there is an index created on the table. Logically, an index is a part of the table and is not even visible from the SQL side: all queries are issued against the table, not the index, and the optimizer decides whether to use an index or not.

However, physically, an index is a separate object in the database.

An index is a shadow copy of the table which stores some subset of the table's data:

  1. The index key, i. e. the columns the index is created on.
  2. The table pointer, that is, a value that uniquely identifies the row the index record reflects. In InnoDB, it is the value of the PRIMARY KEY; in MyISAM, it is an offset in the .MYD file. (A sketch of this layout follows the list.)
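
To make this concrete, here is a rough sketch of what the index from the question would store if it were written out as a table. This is an illustration only: an index is not actually addressable this way, and news_cat_idx_contents is a made-up name.

-- Hypothetical illustration: the contents of the index on (cat_id, id)
CREATE TABLE news_cat_idx_contents (
        cat_id      INT NOT NULL,    -- index key, first column
        id          INT NOT NULL,    -- index key, second column
        row_pointer BIGINT NOT NULL  -- PRIMARY KEY value (InnoDB) or .MYD offset (MyISAM)
);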

The index records are stored in a B-Tree structure, which makes the ref and range searches on them very fast.

However, the index itself does not contain all of the table's data: only the subset we described above. To find the actual table values while preserving the order, one needs to join the index and the table: that is, for each index record, the engine should find the corresponding table record (using the row pointer) and return all non-indexed values from the table itself.

Here's how it looks:

[Figure: row lookups connecting every index record to its table record]

The process of fetching the table records corresponding to the index records is called row lookup. It is pictured by the curvy arrows connecting the index and the table.
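
In pseudo-SQL, the work the engine does for a non-covering index scan looks roughly like the join below. Neither index_records nor table_records exists as a queryable object; this merely mimics the implicit join the engine performs for the query from the question:

-- Illustration only: the implicit join behind row lookups
SELECT  t.*                          -- non-indexed values come from the table
FROM    index_records i              -- ordered B-Tree scan: cat_id = 4, id DESC
JOIN    table_records t              -- one row lookup per index record
ON      t.row_id = i.row_pointer;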

Since the index records and the table records are located far away from each other in memory and on disk, a row lookup leads to many more page misses, cache misses, and disk seeks than sequential access does, and is therefore quite costly. It takes a long time to traverse all the connectors in the picture above.

If we run a plain query which returns all the records, we of course need to fetch all of them, and each row lookup is necessary.

But do we really need it when we use a LIMIT clause with an offset greater than 0?

If we did something like LIMIT 8, 2, we could just skip the first 8 values using the index alone and then return the remaining two:

[Figure: late row lookup: the first 8 index records are skipped without any table access]

We see that this is a much more efficient algorithm that will save us lots of row lookups.

This is called late row lookup: the engine should look a row up only if there is no way to avoid it. If there is a chance that a row will be filtered out by using the indexed fields only, it should be done before the rows are looked up in the actual MySQL table. There is no point in fetching the records out of the table just to discard them.

However, MySQL always does early row lookup: it fetches the table row as soon as it reads the index record, even if the optimizer decided to use the index.
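
One way to see whether row lookups happen at all is the Extra column of EXPLAIN: a query that touches only indexed columns is reported as Using index (a covering scan) and needs no lookups, while a query that touches any other column is not. For example, assuming the news table and the (cat_id, id) index from the question:

EXPLAIN
SELECT  id                  -- covered by the index: Extra shows "Using index"
FROM    news
WHERE   cat_id = 4
ORDER BY
        id DESC
LIMIT   150000, 10;

EXPLAIN
SELECT  *                   -- needs non-indexed columns: each index record
FROM    news                -- read triggers an early row lookup
WHERE   cat_id = 4
ORDER BY
        id DESC
LIMIT   150000, 10;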

Let's create a sample table and try to reproduce this behavior:

CREATE TABLE filler (
        id INT NOT NULL PRIMARY KEY AUTO_INCREMENT
) ENGINE=Memory;

CREATE TABLE t_limit (
        id INT NOT NULL,
        value VARCHAR(20) NOT NULL,
        stuffing VARCHAR(200) NOT NULL,
        CONSTRAINT pk_limit_id PRIMARY KEY (id)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

DELIMITER $$

CREATE PROCEDURE prc_filler(cnt INT)
BEGIN
        DECLARE _cnt INT;
        SET _cnt = 1;
        WHILE _cnt <= cnt DO
                INSERT
                INTO    filler
                SELECT  _cnt;
                SET _cnt = _cnt + 1;
        END WHILE;
END
$$

DELIMITER ;

START TRANSACTION;
CALL prc_filler(200000);
COMMIT;

INSERT
INTO    t_limit
SELECT  id, CONCAT('Value ', id), RPAD('', 200, '*')
FROM    filler;

This MyISAM table contains 200,000 records and has a PRIMARY KEY index on id. Each record is filled with 200 bytes of stuffing data.
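
As a quick sanity check of the setup (an aside, not part of the original post), we can verify the row count and the stuffing length:

SELECT  COUNT(*) AS cnt,
        MIN(LENGTH(stuffing)) AS min_len,
        MAX(LENGTH(stuffing)) AS max_len
FROM    t_limit;

-- Expected: cnt = 200000, min_len = max_len = 200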

Here's the query to select values from 150,001 to 150,010:

SELECT  id, value, LENGTH(stuffing) AS len
FROM    t_limit
ORDER BY
        id
LIMIT 150000, 10

This query runs for almost 6 seconds, which is way too long.

It, however, uses a filesort, which the optimizer considered more efficient than using the index. This would make sense if not for the stuffing field, which is too long to be sorted efficiently. In this case, traversing the index would be faster.
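
This can be checked with EXPLAIN: the plan should report a full scan (type ALL) with Using filesort in the Extra column instead of an index scan:

EXPLAIN
SELECT  id, value, LENGTH(stuffing) AS len
FROM    t_limit
ORDER BY
        id
LIMIT   150000, 10;

-- Expected: type ALL, Extra: Using filesort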

Let's try to force the index:

SELECT  id, value, LENGTH(stuffing) AS len
FROM    t_limit FORCE INDEX (PRIMARY)
ORDER BY
        id
LIMIT 150000, 10

Now it takes only 1.23 seconds, but that is still too long due to the early row lookups.
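
The lookups themselves can be observed through the session handler counters (a diagnostic aside, not from the original post): Handler_read_next counts index records read in key order, each of which triggers an early row lookup here.

FLUSH STATUS;

SELECT  id, value, LENGTH(stuffing) AS len
FROM    t_limit FORCE INDEX (PRIMARY)
ORDER BY
        id
LIMIT   150000, 10;

SHOW SESSION STATUS LIKE 'Handler_read%';

-- Handler_read_next should be on the order of 150,010:
-- one read (with its row lookup) per index record scanned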

We, however, can trick MySQL into using late row lookups.

We will select only the id in a subquery with an ORDER BY and LIMIT, and then join the original table back on id.

This will make each individual row lookup less efficient, since each join requires looking the index value up again. However, that does not matter much here: the total number of lookups will be reduced greatly, so an overall performance increase is expected:

SELECT  l.id, value, LENGTH(stuffing) AS len
FROM    (
        SELECT  id
        FROM    t_limit
        ORDER BY
                id
        LIMIT 150000, 10
        ) o
JOIN    t_limit l
ON      l.id = o.id
ORDER BY
        l.id

This is only 75 ms, or 16 times as fast as the previous query.

Note that we put an additional ORDER BY at the end of the query, since the order is not guaranteed to be preserved after the join. This results in an extra filesort in the plan. However, the plan actually used outputs the values already sorted, so this filesort requires only a single pass over 10 values, which is instant.
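
Applied to the query from the original question, the same rewrite looks like this (assuming the news table and the (cat_id, id) index mentioned there, which covers the subquery):

SELECT  n.*
FROM    (
        SELECT  id
        FROM    news
        WHERE   cat_id = 4
        ORDER BY
                id DESC
        LIMIT   150000, 10
        ) o
JOIN    news n
ON      n.id = o.id
ORDER BY
        n.id DESC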

Written by Quassnoi

October 23rd, 2009 at 11:00 pm

Posted in MySQL

26 Responses to 'MySQL ORDER BY / LIMIT performance: late row lookups'

  1. your SQL is kind of complex in this case
    you may write your SQL in this more effective way:
    SELECT id, value, LENGTH(stuffing) AS len
    FROM t_limit FORCE INDEX (PRIMARY)
    where id>150000
    ORDER BY id
    limit 10

    liuyuxun

    11 Jan 13 at 12:30

  2. @liuyuxun: what if ids are not consecutive?

    Quassnoi

    11 Jan 13 at 15:38

  3. Wow that’s horrible. Do you know if Postgres also suffers from that?

    Martin S.

    2 Mar 13 at 22:23

  4. @Martin: until 9.2, PostgreSQL was not capable of index-only scans at all, since index records do not store information about the transaction which created them, and hence their visibility could not be determined without examining the table record.

    Quassnoi

    3 Mar 13 at 13:51

  5. I don’t understand how come this “late row lookup” is not part of the MySQL query optimizer? It’s such an obvious and simple implementation, after all. Or am I missing something here?

    Ross

    13 Jan 14 at 09:16

  6. @ross: well I believe mysql devs would give a better answer to that

    Quassnoi

    13 Jan 14 at 09:30

  7. I was referred to this site after posting a question on SO. I think this solution only abstracts away the slowness due to the optimizer not using the key, but does not abstract away the problem of scanning the index in the wrong direction due to the large offset.

    One way to make it even faster is to calculate the offset from the opposite direction when the offset is larger than half the total rows, sort in the opposite order, and then reverse the order back again in the result.

    This will make the query blazing fast at both head and tail end of the query.

    Question Overflow

    24 Feb 14 at 13:20

  8. @QuestionOverflow: this would make sense for MyISAM if it were not dead.

    Quassnoi

    24 Feb 14 at 13:23

  9. Sorry, I wrote too early. I missed the point that you have no WHERE clause within your subquery, which explains why you are able to reduce the query time to 75 ms despite having a large offset.

    Question Overflow

    24 Feb 14 at 13:26

  10. @Quassnoi, what do you recommend as a replacement for MyISAM for a read-heavy server?

    Question Overflow

    24 Feb 14 at 13:37

  11. @QuestionOverflow: InnoDB or its flavors

    Quassnoi

    24 Feb 14 at 13:39

  12. @Quassnoi, I tried InnoDB before and felt that it is still much slower than MyISAM. That is why I am still sticking to it although the trend now is more towards InnoDB.

    Question Overflow

    24 Feb 14 at 13:54

  13. @QuestionOverflow: good.

    Quassnoi

    24 Feb 14 at 13:55

  14. Q:
    If I use the following SQL, will it be more efficient?

    SELECT l.id, value, LENGTH(stuffing) AS len
    FROM t_limit l
    WHERE l.id in (SELECT id where FROM t_limit ORDER BY id LIMIT 150000, 10) o
    ORDER BY l.id;

    YueGang

    5 Dec 14 at 11:24

  15. My fault!

    YueGang

    5 Dec 14 at 12:11

  16. I tried this

    EXPLAIN EXTENDED SELECT * FROM products JOIN (SELECT ID FROM products ORDER BY createdAt LIMIT 1000, 100) AS t ON t.ID = t_nutrient.ID;

    and this

    EXPLAIN EXTENDED select * FROM t_nutrient order by createdAt limit 1000,100;

    The second one is faster.

    budyk

    22 Dec 14 at 12:46

  17. @budyk: good.

    Quassnoi

    22 Dec 14 at 12:50

  18. Does this work on queries with multiple joins and ORDER BY string as well?

    Armin

    23 Nov 17 at 23:30

  19. @Armin: does what exactly work?

    Quassnoi

    23 Nov 17 at 23:47

  20. In reality, wouldn’t you normally want some kind of WHERE condition somewhere? Wouldn’t this undermine the strategy outlined above?

    ZC

    15 Mar 18 at 14:14

  21. @ZC: there is a WHERE condition in the original query; that’s exactly what causes the problem. If not for the WHERE condition, MySQL would probably read right from the table itself and not do the PK lookups at all.

    Quassnoi

    15 Mar 18 at 15:03

  22. LOL. The author needs to do some reading on how InnoDB stores its data. Hint: InnoDB tables are what’s called clustered tables ie there is no separate PK index.

    Andrey K

    19 Apr 18 at 23:09

  23. @andrey K,

    Given that the author is clearly referring to secondary indexes, he is strictly correct.

    While it’s petty and mean-spirited to ridicule a helpful article and author like this 5 years after its publication, it’s laughable that you attempt to do so because you can’t properly infer the intent and context of the author.

    Further, your attempt at being hyper-correct has made you incorrect. The PK IS a separate structure and concept in InnoDB tables. The special behavior you are incorrectly attempting to rub in the author’s face is actually simply that the primary key for a given record acts as a pointer to the page that contains the record data itself. In early versions of 5, the PK and data were literally on the same page (I believe this caused issues with key length for both primary and secondary keys, but don’t quote me on this). Things may have changed in more recent versions of MySQL, but honestly, it’s not worth the effort to find a more detailed elaboration.

    https://docs.oracle.com/cd/E17952_01/mysql-5.1-en/innodb-table-and-index.html
    https://dev.mysql.com/doc/refman/8.0/en/innodb-index-types.html

    tl;dr: if you don’t have something nice to say, don’t say anything at all; if you don’t have anything correct to contribute, then go stand in the corner and be quiet.

    zentechinc

    29 Jan 19 at 01:01

  24. Hi,

    Very good, this article saved me!

    Thanks

    Helder

    15 Mar 21 at 11:35

  25. This is awesome, good content with an excellent explanation: it shows not only how to do it, but also why it happens. I see it was written in 2009, and of course the seek method is faster than an offset. But the seek method has a weakness if used for paging when records have been deleted.

    MrTSCode

    11 Feb 22 at 06:27

  26. The middle step helps speed because MyISAM’s `PRIMARY KEY(id)` is pairs of id and a row pointer (an offset into the .MYD file, or a row number if FIXED length).

    In InnoDB, the PRIMARY KEY is stored in the data’s BTree, so you can’t get the “late lookup” as easily.

    Still, your use of a subquery that finds the 10 ids and then fetches the other columns will work nicely for “late” loading.

    (Meanwhile, you should edit this blog to point out that you are using the deprecated MyISAM engine. Also, please be clear about what works in the preferred InnoDB. Talking about MariaDB’s Aria would be a nice bonus.)

    Rick James

    4 Aug 22 at 09:09
