Why does the order of the loops affect performance when iterating over a 2D array?

Happy的楠姐 2020-11-22 06:15

Below are two programs that are almost identical, except that I have switched the i and j loop variables around. Why do they run in such different amounts of time?
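The programs themselves are not included in this copy of the question, so below is a minimal reconstruction based on the details the answers rely on (a static 4000×4000 int array and the assignment x[j][i] = i + j), with rough timing added. Treat it as a sketch rather than the poster's exact code.

    #include <stdio.h>
    #include <time.h>

    static int x[4000][4000];

    int main(void) {
        clock_t t0, t1;

        /* Version 1: i outer, j inner. The inner loop varies the FIRST
         * index of x[j][i], so each store lands 4000 ints further on. */
        t0 = clock();
        for (int i = 0; i < 4000; i++)
            for (int j = 0; j < 4000; j++)
                x[j][i] = i + j;
        t1 = clock();
        printf("version 1: %.2fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

        /* Version 2: identical except i and j are swapped in the loop
         * headers. The inner loop now varies the LAST index, so the
         * stores walk through memory sequentially. */
        t0 = clock();
        for (int j = 0; j < 4000; j++)
            for (int i = 0; i < 4000; i++)
                x[j][i] = i + j;
        t1 = clock();
        printf("version 2: %.2fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

        return 0;
    }

Timings will vary by machine; to keep an optimiser from discarding the stores to the otherwise-unused array, compile without heavy optimisation or print an element of x at the end.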

7 Answers
  • 2020-11-22 06:46

    The reason is cache locality. In the second program you scan linearly through memory, which benefits from caching and prefetching. The first program's access pattern is far more spread out and therefore has much worse cache behavior.

  • 2020-11-22 06:46

    This line is the culprit:

    x[j][i]=i+j;
    

    The second version accesses contiguous memory and is therefore substantially faster.

    I tried with

    x[50000][50000];
    

    and the execution time was 13s for version 1 versus 0.6s for version 2.

  • 2020-11-22 06:52

    Version 2 will run much faster because it uses your computer's cache better than version 1. If you think about it, arrays are just contiguous areas of memory. When you request an element of an array, the CPU pulls the whole cache line containing that element into its cache. Since the next few elements sit on that same line (because they are contiguous), the next accesses are already in cache! This is how version 2 gets its speedup.

    Version 1, on the other hand, accesses elements column-wise rather than row-wise. That access pattern is not contiguous at the memory level, so the program cannot take advantage of caching nearly as much.

  • 2020-11-22 06:55

    Besides the other excellent answers on cache hits, there is also a possible optimization difference. Your second loop is likely to be optimized by the compiler into something equivalent to:

      for (j=0; j<4000; j++) {
        int *p = x[j];
        for (i=0; i<4000; i++) {
          *p++ = i+j;
        }
      }
    

    This is less likely for the first loop, because it would need to increment the pointer "p" by 4000 each time.

    EDIT: p++ and even *p++ = .. can be compiled to a single CPU instruction on most CPUs. *p = ..; p += 4000 cannot, so there is less benefit in optimising it. It's also more difficult, because the compiler needs to know and use the size of the inner array. And this pattern does not occur that often in the inner loops of normal code (it arises only for multidimensional arrays where the last index is kept constant in the loop and the second-to-last one is stepped), so optimising it is less of a priority.
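    For contrast, here is a hand-written pointer form of the first loop (just an illustration of the point above, not actual compiler output): every store has to bump the pointer by a whole row.

      for (i = 0; i < 4000; i++) {
        int *p = &x[0][i];     /* column i of the first row        */
        for (j = 0; j < 4000; j++) {
          *p = i + j;
          p += 4000;           /* same column, next row: a jump of
                                  4000 ints (16,000 bytes)          */
        }
      }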

  • 2020-11-22 06:59

    Nothing to do with assembly. This is due to cache misses.

    C multidimensional arrays are stored row-major, so the last index varies fastest in memory. The first version therefore misses the cache on practically every access, whereas the second version mostly doesn't, which is why the second version is substantially faster.

    See also: http://en.wikipedia.org/wiki/Loop_interchange.
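    A quick standalone check of that layout rule (my own illustration, not from the linked page): the byte offset of x[j][i] from the start of the array is (j*N + i) * sizeof(int), so stepping the last index moves one int while stepping the first index moves a whole row.

      #include <assert.h>
      #include <stdio.h>

      #define N 4000
      static int x[N][N];

      int main(void) {
          int j = 2, i = 3;
          /* Row-major: element (j, i) lives j*N + i ints from the start. */
          size_t off = (size_t)((char *)&x[j][i] - (char *)x);
          assert(off == ((size_t)j * N + i) * sizeof(int));
          printf("x[%d][%d] sits %zu bytes into the array\n", j, i, off);
          return 0;
      }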

  • 2020-11-22 07:04

    As others have said, the issue is the store to the memory location in the array, x[j][i]. Here's a bit of insight into why that matters:

    You have a 2-dimensional array, but memory in the computer is inherently 1-dimensional. So while you imagine your array like this:

    0,0 | 0,1 | 0,2 | 0,3
    ----+-----+-----+----
    1,0 | 1,1 | 1,2 | 1,3
    ----+-----+-----+----
    2,0 | 2,1 | 2,2 | 2,3
    

    Your computer stores it in memory as a single line:

    0,0 | 0,1 | 0,2 | 0,3 | 1,0 | 1,1 | 1,2 | 1,3 | 2,0 | 2,1 | 2,2 | 2,3
    

    In the 2nd example, you access the array by looping over the 2nd number first, i.e.:

    x[0][0] 
            x[0][1]
                    x[0][2]
                            x[0][3]
                                    x[1][0] etc...
    

    Meaning that you're hitting them all in order. Now look at the 1st version. You're doing:

    x[0][0]
                                    x[1][0]
                                                                    x[2][0]
            x[0][1]
                                            x[1][1] etc...
    

    Because of the way C lays out the 2-D array in memory, you're asking it to jump all over the place. But now for the kicker: why does this matter? All memory accesses are the same, right?

    No: because of caches. Data from your memory gets brought over to the CPU in little chunks (called 'cache lines'), typically 64 bytes. If you have 4-byte integers, that means you're getting 16 consecutive integers in a neat little bundle. It's actually fairly slow to fetch these chunks of memory; your CPU can do a lot of work in the time it takes a single cache line to load.

    Now look back at the order of accesses: The second example is (1) grabbing a chunk of 16 ints, (2) modifying all of them, (3) repeat 4000*4000/16 times. That's nice and fast, and the CPU always has something to work on.

    The first example is (1) grab a chunk of 16 ints, (2) modify only one of them, (3) repeat 4000*4000 times. That's going to require 16 times the number of "fetches" from memory. Your CPU will actually have to spend time sitting around waiting for that memory to show up, and while it's sitting around you're wasting valuable time.
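    To put rough numbers on that, here is a small side calculation of my own (note that _SC_LEVEL1_DCACHE_LINESIZE is a Linux/glibc extension, so the code falls back to the 64-byte figure assumed above):

      #include <stdio.h>
      #include <unistd.h>   /* sysconf(); the cache-line query is a Linux/glibc extension */

      int main(void) {
          long line = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);
          if (line <= 0)
              line = 64;                     /* fall back to the 64-byte assumption */

          long n = 4000;
          long ints_per_line = line / (long)sizeof(int);

          printf("sequential stores: ~%ld cache-line fills\n", n * n / ints_per_line);
          printf("strided stores:    ~%ld cache-line fills\n", n * n);
          return 0;
      }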

    Important Note:

    Now that you have the answer, here's an interesting note: there's no inherent reason that your second example has to be the fast one. For instance, in Fortran, the first example would be fast and the second one slow. That's because instead of expanding things out into conceptual "rows" like C does, Fortran expands into "columns", i.e.:

    0,0 | 1,0 | 2,0 | 0,1 | 1,1 | 2,1 | 0,2 | 1,2 | 2,2 | 0,3 | 1,3 | 2,3
    

    The layout of C is called 'row-major' and Fortran's is called 'column-major'. As you can see, it's very important to know whether your programming language is row-major or column-major! Here's a link for more info: http://en.wikipedia.org/wiki/Row-major_order
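    To make the two layouts concrete, here is a small sketch of my own (plain C, simulating the Fortran ordering by hand with an index formula):

      #include <stdio.h>

      #define ROWS 3
      #define COLS 4

      /* Flat-buffer index of element (row, col) under each convention. */
      static int row_major(int row, int col) { return row * COLS + col; } /* C       */
      static int col_major(int row, int col) { return col * ROWS + row; } /* Fortran */

      int main(void) {
          /* Moving along a row is a step of 1 in row-major but a step of
           * ROWS in column-major, and vice versa for moving down a column. */
          printf("(1,2) -> (1,3): row-major step %d, column-major step %d\n",
                 row_major(1, 3) - row_major(1, 2),
                 col_major(1, 3) - col_major(1, 2));
          printf("(1,2) -> (2,2): row-major step %d, column-major step %d\n",
                 row_major(2, 2) - row_major(1, 2),
                 col_major(2, 2) - col_major(1, 2));
          return 0;
      }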
