Where do x86-64's SSE instructions (vector instructions) outperform the normal scalar instructions? Because what I'm seeing is that the frequent loads and stores that are required…
Summarizing comments into an answer:
You have fallen into the same trap that catches most first-timers. There are basically two problems in your example:
`_mm_set_epi32()` is a very expensive intrinsic. Although it's convenient to use, it doesn't compile to a single instruction, and some compilers (such as VS2010) can generate very poorly performing code for it.
Instead, since you are loading contiguous blocks of memory, you should use `_mm_load_si128()`. That requires the pointer to be aligned to 16 bytes. If you can't guarantee this alignment, you can use `_mm_loadu_si128()`, but with a performance penalty. Ideally, you should properly align your data so that you don't need to resort to `_mm_loadu_si128()`.
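As a minimal sketch (the function name and the toy arrays are my own, not from the question), this is what the aligned-load path looks like next to the `_mm_set_epi32()` path it replaces:

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>

/* Adds two arrays of four int32s.  Each operand is fetched with a single
   aligned 128-bit load (one movdqa) instead of _mm_set_epi32(), which
   would gather the four scalars element by element. */
void add4_aligned(int32_t *out, const int32_t *a, const int32_t *b)
{
    /* a, b, out must be 16-byte aligned or these loads/stores fault */
    __m128i va = _mm_load_si128((const __m128i *)a);
    __m128i vb = _mm_load_si128((const __m128i *)b);
    _mm_store_si128((__m128i *)out, _mm_add_epi32(va, vb));
}
```

The caller has to provide the 16-byte alignment, e.g. with C11 `_Alignas(16)` on a local array or `_mm_malloc(size, 16)` for heap allocations; otherwise fall back to `_mm_loadu_si128()`.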
To be truly efficient with SSE, you'll also want to maximize your computation-to-load/store ratio. A target I shoot for is 3 - 4 arithmetic instructions per memory access. This is a fairly high ratio; typically you have to refactor the code or redesign the algorithm to reach it. Combining passes over the data is a common approach.
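To illustrate the pass-combining idea with a made-up computation (the operation `y = 2x + 1` and the function name are illustrative, not from the question): done as two separate loops, every element would be loaded and stored twice for one arithmetic op each time; fusing the passes doubles the arithmetic done per trip to memory.

```c
#include <emmintrin.h>

/* One fused pass computing dst[i] = 2*src[i] + 1.
   Assumes n is a multiple of 4 and both pointers are 16-byte aligned. */
void fused_scale_add(float *dst, const float *src, int n)
{
    const __m128 two = _mm_set1_ps(2.0f);
    const __m128 one = _mm_set1_ps(1.0f);
    for (int i = 0; i < n; i += 4) {
        __m128 v = _mm_load_ps(src + i);
        v = _mm_mul_ps(v, two);  /* what "pass 1" would have done */
        v = _mm_add_ps(v, one);  /* "pass 2", with no second load/store */
        _mm_store_ps(dst + i, v);
    }
}
```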
Loop unrolling is often necessary to maximize performance when you have large loop bodies with long dependency chains.
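A common case is a reduction, where every iteration depends on the previous one through the accumulator. A sketch of breaking that chain by unrolling with two independent accumulators (the function and data layout are my own example):

```c
#include <emmintrin.h>

/* Sums n floats.  The two accumulators carry independent dependency
   chains, so the adds from consecutive iterations can overlap in the
   pipeline instead of serializing on one register. */
float sum_unrolled(const float *p, int n)
{
    __m128 acc0 = _mm_setzero_ps();
    __m128 acc1 = _mm_setzero_ps();
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        acc0 = _mm_add_ps(acc0, _mm_loadu_ps(p + i));
        acc1 = _mm_add_ps(acc1, _mm_loadu_ps(p + i + 4));
    }
    /* combine the partial vector sums, then reduce horizontally */
    __m128 acc = _mm_add_ps(acc0, acc1);
    float tmp[4];
    _mm_storeu_ps(tmp, acc);
    float s = tmp[0] + tmp[1] + tmp[2] + tmp[3];
    for (; i < n; i++)  /* scalar tail for leftover elements */
        s += p[i];
    return s;
}
```

Note that reordering the additions this way assumes the usual caveat for floating point: the result can differ from a strictly sequential sum by rounding.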
Here are some examples of SO questions that successfully use SSE to achieve a speedup: