I am doing some matrix multiplication benchmarking, as previously mentioned in Why is MATLAB so fast in matrix multiplication?
Now I've got another issue: when multiplying two 2048x2048 matrices, the multiplication is much slower than with 2049x2049 matrices. What is going on?
This probably has to do with conflicts in your L2 cache.
Cache misses on matice1 are not the problem, because its elements are accessed sequentially. For matice2, if a full column fits in L2 (i.e. when you access matice2[0, 0], matice2[1, 0], matice2[2, 0], etc., nothing gets evicted), then there is no problem with cache misses on matice2 either.
Now to go deeper into how caches work: if the byte address of your variable is X, then the cache line index for it is (X >> 6) & (L - 1), where L is the total number of cache lines in your cache. L is always a power of 2. The 6 comes from the fact that 2^6 == 64 bytes is the standard size of a cache line.
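To make this concrete, here is a minimal sketch of that mapping (the direct-mapped model and the 64-byte line are the simplifying assumptions from above; real L2 caches are set-associative):

    // Cache line index for byte address x in a direct-mapped cache with
    // l lines of 64 bytes each (l must be a power of 2).
    static long CacheLineIndex(long x, long l)
    {
        return (x >> 6) & (l - 1);  // drop the 6 offset bits, keep log2(l) index bits
    }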
What does this mean? It means that if I have address X and address Y, and (X >> 6) - (Y >> 6) is divisible by L (i.e. some large power of 2), they will be stored in the same cache line.
Now back to your problem: what is the difference between 2048 and 2049?
When 2048 is your size:
If you take &matice2[x, k] and &matice2[y, k], their addresses differ by a multiple of 2048 * 4 bytes (the row length times the size of a float), so the difference (&matice2[x, k] >> 6) - (&matice2[y, k] >> 6) is divisible by 2048 * 4 / 64 = 128, still a large power of 2.
Thus, depending on the size of your L2, you will have a lot of cache line conflicts and use only a small portion of your L2 to store a column. You won't actually be able to fit a full column in your cache, so you will get bad performance.
When the size is 2049, the address difference is a multiple of 2049 * 4, which is not a power of 2, so you will have far fewer conflicts and your column will safely fit into your cache.
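You can count the conflicts directly. The sketch below assumes a direct-mapped cache with 8192 lines of 64 bytes (a 512 KB L2; both numbers are just for illustration) and row-major float storage:

    using System.Collections.Generic;

    const long L = 8192;  // assumed number of cache lines (512 KB / 64 B)

    static long LineIndex(long byteAddr) => (byteAddr >> 6) & (L - 1);

    // Count the distinct cache lines one column touches.
    static int DistinctLinesForColumn(int rowLength, int rows)
    {
        var lines = new HashSet<long>();
        for (int row = 0; row < rows; row++)
            lines.Add(LineIndex((long)row * rowLength * sizeof(float)));
        return lines.Count;
    }

    // DistinctLinesForColumn(2048, 2048) == 64   -> 2048 elements fight over 64 lines
    // DistinctLinesForColumn(2049, 2048) == 2048 -> every element gets its own line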
Now to test this theory, there are a couple of things you can do (a sketch of the experiment follows below):
Allocate your matice2 array like this: matice2[razmor, 4096], and run with razmor = 1024, 1025, or any size. You should see very bad performance compared to what you had before, because this forcefully aligns all columns to conflict with each other.
Then try matice2[razmor, 4097] and run it with any size; you should see much better performance.
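Here is a minimal sketch of that experiment, assuming the naive triple loop from the question (the method name and harness are mine, not from the original code):

    // matice2 is padded to a fixed column count; only the first razmor are used.
    // A 4096-column allocation forces all columns onto the same cache lines;
    // a 4097-column allocation staggers them by one float per row.
    static float[,] Multiply(float[,] matice1, float[,] matice2, int razmor)
    {
        var result = new float[razmor, razmor];
        for (int x = 0; x < razmor; x++)
            for (int y = 0; y < razmor; y++)
            {
                float sum = 0;
                for (int k = 0; k < razmor; k++)
                    sum += matice1[x, k] * matice2[k, y];  // column walk over matice2
                result[x, y] = sum;
            }
        return result;
    }

    // var slow = Multiply(new float[1024, 1024], new float[1024, 4096], 1024);
    // var fast = Multiply(new float[1024, 1024], new float[1024, 4097], 1024);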
Or cache thrashing, if I can coin a term.
Caches work by indexing with low-order bits and tagging with high-order bits.
Imagine that your cache holds 4 words and that your matrix is 4 x 4. When a column is accessed and the row length is any power of two, each column element in memory maps to the same cache element.
A power-of-two-plus-one row length is actually about optimal for this problem: each new column element maps to the next cache slot, exactly as if you were accessing by row.
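You can watch this happen with the 4-word toy cache (direct-mapped and word-granular, matching the thought experiment above; real caches differ in both respects):

    using System;

    // Slot that column element (row, 0) lands in: word address row * rowLength,
    // modulo the 4 slots of the toy cache.
    static void PrintColumnSlots(int rowLength)
    {
        for (int row = 0; row < 4; row++)
            Console.WriteLine($"row {row}: slot {row * rowLength % 4}");
    }

    // PrintColumnSlots(4): slots 0, 0, 0, 0 -> every access evicts the previous one
    // PrintColumnSlots(5): slots 0, 1, 2, 3 -> behaves like a row traversal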
In real life, a cache line covers multiple sequentially increasing addresses, so several adjacent elements of a row are cached together. Because each new row is offset to a different slot, traversing a column doesn't replace the previous entry; when the next column is traversed, the entire cache holds different rows, and each row section that fits in the cache hits for several columns.
Since the cache is vastly faster than DRAM (mostly by virtue of being on-chip), hit rate is everything.
Probably a caching effect. With matrix dimensions that are large powers of two, and a cache size that is also a power of two, you can end up only using a small fraction of your L1 cache, slowing things down a lot. Naive matrix multiplication is usually constrained by the need to fetch data into the cache. Optimized algorithms using tiling (or cache-oblivious algorithms) focus on making better use of L1 cache.
If you time other pairs (2^n-1,2^n) I expect you'll see similar effects.
To explain more fully: in the inner loop, where you access matice2[m, k], it's likely that matice2[m, k] and matice2[m+1, k] are offset from each other by 2048 * sizeof(float) and thus map to the same index in the L1 cache. With an N-way associative cache, you will typically have 1-8 cache locations for all of these, so almost all of those accesses trigger an L1 cache eviction and a fetch from a slower cache or main memory.
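For concreteness, assume a typical 32 KB, 8-way L1 with 64-byte lines, i.e. 32768 / 64 / 8 = 64 sets (illustrative numbers, not a claim about your CPU). A stride of 2048 floats then always lands in the same set:

    // L1 set index with 64 sets: bits 6..11 of the byte address.
    static long L1Set(long byteAddr) => (byteAddr >> 6) & 63;

    // Column neighbours are 2048 * sizeof(float) = 8192 bytes apart, and
    // (8192 >> 6) & 63 == 0, so L1Set(0) == L1Set(8192) == L1Set(16384) == ...
    // With only 8 ways per set, the 9th element of the column evicts the 1st.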
You appear to have hit a cache size limit, or perhaps you have some repeatability problems in your timings.
Whatever the issue is, you simply should not write matrix multiplication yourself in C#; use an optimized BLAS instead. That size of matrix should be multiplied in under a second on any modern machine.
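As an example of what that can look like in C#, here is a sketch using the MathNet.Numerics package (one managed option among several; the native-provider call only takes effect if the corresponding MKL package is installed):

    using MathNet.Numerics;
    using MathNet.Numerics.LinearAlgebra;

    var a = Matrix<double>.Build.Random(2048, 2048);
    var b = Matrix<double>.Build.Random(2048, 2048);
    Control.UseNativeMKL();  // optional: route through a native BLAS provider
    var c = a * b;           // dispatched to the optimized provider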
Effectively utilizing the cache hierarchy is very important. You need to lay out multidimensional array data so that accesses stay cache-friendly, which can be accomplished by tiling. To do this you'll need to store the 2D array as a 1D array together with an indexing mechanism. The problem with the traditional layout is that although two adjacent elements in the same row are next to each other in memory, two adjacent elements in the same column are separated by W elements, where W is the number of columns. Tiling can make as much as a factor-of-ten performance difference.
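Here is a minimal sketch of that idea, assuming square n x n matrices stored row-major in 1D arrays; the block size of 64 is a starting point to tune, not a measured optimum:

    using System;

    // Tiled multiply over 1D-backed matrices, with a[i, k] stored at a[i * n + k].
    static float[] MultiplyTiled(float[] a, float[] b, int n, int blockSize = 64)
    {
        var c = new float[n * n];
        for (int i0 = 0; i0 < n; i0 += blockSize)
            for (int k0 = 0; k0 < n; k0 += blockSize)
                for (int j0 = 0; j0 < n; j0 += blockSize)
                    // One block of work: the three sub-blocks stay cache-resident.
                    for (int i = i0; i < Math.Min(i0 + blockSize, n); i++)
                        for (int k = k0; k < Math.Min(k0 + blockSize, n); k++)
                        {
                            float aik = a[i * n + k];
                            for (int j = j0; j < Math.Min(j0 + blockSize, n); j++)
                                c[i * n + j] += aik * b[k * n + j];  // c[i,j] += a[i,k] * b[k,j]
                        }
        return c;
    }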
I suspect it is the result of something called "sequential flooding": you are looping over a list of objects that is slightly larger than the cache, so every single request to the list (array) must be served from RAM, and you will not get a single cache hit.
In your case, you are looping through your arrays' 2048 indexes 2048 times, but you only have space for 2047 (possibly due to some overhead from the array structure), so each time you access an array position it must be fetched from RAM. It is then stored in the cache, but it gets evicted just before it is used again. The cache is therefore essentially useless, leading to a much longer execution time.
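If you want to probe for this, a rough micro-benchmark is to time repeated sequential passes over arrays just below and just above the presumed cache size (the 7 MB / 9 MB pair below brackets an assumed 8 MB cache; substitute your own):

    using System;
    using System.Diagnostics;

    static double NanosPerElement(int floats)
    {
        var data = new float[floats];
        float sum = 0;
        var sw = Stopwatch.StartNew();
        for (int pass = 0; pass < 100; pass++)
            for (int i = 0; i < data.Length; i++)
                sum += data[i];
        sw.Stop();
        Console.WriteLine(sum);  // keep the loops from being optimized away
        return sw.Elapsed.TotalMilliseconds * 1e6 / (100.0 * floats);
    }

    // Compare NanosPerElement(7 << 18) vs NanosPerElement(9 << 18)
    // (~7 MB vs ~9 MB of floats); a jump in the second suggests falling out of cache.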