The problem can be described as follows.
Input
__m256d a, b, c, d
Output
__m256d s = {a[0]+a[1]+a[2]+a[3], b[0]+b[1]+b[2]+b[3], c[0]+c[1]+c[2]+c[3], d[0]+d[1]+d[2]+d[3]}
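In other words, one horizontal sum per input vector. For reference, a plain scalar version of one output element might look like this (the helper name hsum_ref is made up for illustration, not from the question):

#include <immintrin.h>

// Scalar reference for one element of s:
// store the vector to memory and sum its four doubles.
static double hsum_ref(__m256d v)
{
    double tmp[4];
    _mm256_storeu_pd(tmp, v);
    return tmp[0] + tmp[1] + tmp[2] + tmp[3];
}

// s = { hsum_ref(a), hsum_ref(b), hsum_ref(c), hsum_ref(d) }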
I'm not aware of any single instruction that performs that sort of permutation. Most AVX instructions treat the upper and lower 128 bits of a register as independent lanes, so there is little capability for intermingling values from the two halves. The best implementation I can think of is based on the answer to this question:
__m128d horizontal_add_pd(__m256d x1, __m256d x2)
{
// calculate 4 two-element horizontal sums:
// lower 64 bits contain x1[0] + x1[1]
// next 64 bits contain x2[0] + x2[1]
// next 64 bits contain x1[2] + x1[3]
// next 64 bits contain x2[2] + x2[3]
__m256d sum = _mm256_hadd_pd(x1, x2);
// extract upper 128 bits of result
__m128d sum_high = _mm256_extractf128_pd(sum, 1);
// add upper 128 bits of sum to its lower 128 bits
__m128d result = _mm_add_pd(sum_high, _mm256_castpd256_pd128(sum));
// lower 64 bits of result contain the sum of x1[0], x1[1], x1[2], x1[3]
// upper 64 bits of result contain the sum of x2[0], x2[1], x2[2], x2[3]
return result;
}
__m256d a, b, c, d;
__m128d res1 = horizontal_add_pd(a, b);
__m128d res2 = horizontal_add_pd(c, d);
// At this point:
// res1 contains a's horizontal sum in bits 0-63
// res1 contains b's horizontal sum in bits 64-127
// res2 contains c's horizontal sum in bits 0-63
// res2 contains d's horizontal sum in bits 64-127
// cast res1 to a __m256d, then insert res2 into the upper 128 bits of the result
__m256d sum = _mm256_insertf128_pd(_mm256_castpd128_pd256(res1), res2, 1);
// At this point:
// sum contains a's horizontal sum in bits 0-63
// sum contains b's horizontal sum in bits 64-127
// sum contains c's horizontal sum in bits 128-191
// sum contains d's horizontal sum in bits 192-255
This should be what you want. The above is doable in 7 instructions total (the cast doesn't generate any code; it's just a note to the compiler to change the way it's treating the value in res1), assuming that the short horizontal_add_pd() function can be inlined by your compiler and you have enough registers available.
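For a quick sanity check, something like the following should print the four expected sums (the test values and main() are mine, and this assumes the horizontal_add_pd() above is in scope):

#include <stdio.h>
#include <immintrin.h>

int main(void)
{
    // _mm256_set_pd takes its arguments from high element to low element.
    __m256d a = _mm256_set_pd(4.0, 3.0, 2.0, 1.0);    // a = {1,2,3,4}
    __m256d b = _mm256_set_pd(8.0, 7.0, 6.0, 5.0);    // b = {5,6,7,8}
    __m256d c = _mm256_set_pd(12.0, 11.0, 10.0, 9.0); // c = {9,10,11,12}
    __m256d d = _mm256_set_pd(16.0, 15.0, 14.0, 13.0);// d = {13,14,15,16}

    __m128d res1 = horizontal_add_pd(a, b);
    __m128d res2 = horizontal_add_pd(c, d);
    __m256d sum = _mm256_insertf128_pd(_mm256_castpd128_pd256(res1), res2, 1);

    double out[4];
    _mm256_storeu_pd(out, sum);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]); // expect: 10 26 42 58
    return 0;
}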
The VHADD instructions are meant to be followed by regular VADDs. The following code should give you what you want:
// {a[0]+a[1], b[0]+b[1], a[2]+a[3], b[2]+b[3]}
__m256d sumab = _mm256_hadd_pd(a, b);
// {c[0]+c[1], d[0]+d[1], c[2]+c[3], d[2]+d[3]}
__m256d sumcd = _mm256_hadd_pd(c, d);
// {a[0]+a[1], b[0]+b[1], c[2]+c[3], d[2]+d[3]}
__m256d blend = _mm256_blend_pd(sumab, sumcd, 0b1100);
// {a[2]+a[3], b[2]+b[3], c[0]+c[1], d[0]+d[1]}
__m256d perm = _mm256_permute2f128_pd(sumab, sumcd, 0x21);
// {a[0]+a[1]+a[2]+a[3], b[0]+b[1]+b[2]+b[3], c[0]+c[1]+c[2]+c[3], d[0]+d[1]+d[2]+d[3]}
__m256d sum = _mm256_add_pd(perm, blend);
This gives the result in 5 instructions. I hope I got the constants right.
The permutation that you proposed is certainly possible to accomplish, but it takes multiple instructions. Sorry that I'm not answering that part of your question.
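If you want to double-check the constants yourself, a scalar comparison is easy to write (the test harness below is mine, not part of the answer):

#include <stdio.h>
#include <immintrin.h>

int main(void)
{
    double in[16] = { 1, 2, 3, 4,  5, 6, 7, 8,  9, 10, 11, 12,  13, 14, 15, 16 };
    __m256d a = _mm256_loadu_pd(in + 0);
    __m256d b = _mm256_loadu_pd(in + 4);
    __m256d c = _mm256_loadu_pd(in + 8);
    __m256d d = _mm256_loadu_pd(in + 12);

    // the 5-instruction sequence from above
    __m256d sumab = _mm256_hadd_pd(a, b);
    __m256d sumcd = _mm256_hadd_pd(c, d);
    __m256d blend = _mm256_blend_pd(sumab, sumcd, 0b1100);
    __m256d perm  = _mm256_permute2f128_pd(sumab, sumcd, 0x21);
    __m256d sum   = _mm256_add_pd(perm, blend);

    double out[4];
    _mm256_storeu_pd(out, sum);
    for (int i = 0; i < 4; i++) {
        double want = in[4*i] + in[4*i+1] + in[4*i+2] + in[4*i+3];
        printf("%d: got %g, want %g\n", i, out[i], want);
    }
    return 0;
}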
Edit: I couldn't resist, here's the complete permutation. (Again, I did my best to get the constants right.) You can see that swapping u[1] and u[2] is possible, it just takes a bit of work; crossing the 128-bit barrier is difficult in first-generation AVX. I also want to say that VADD is preferable to VHADD, because VADD has twice the throughput even though it's doing the same number of additions.
// {x[0],x[1],x[2],x[3]}
__m256d x;
// {x[1],x[0],x[3],x[2]}
__m256d xswap = _mm256_permute_pd(x, 0b0101);
// {x[3],x[2],x[1],x[0]}
__m256d xflip128 = _mm256_permute2f128_pd(xswap, xswap, 0x01);
// {x[0],x[2],x[1],x[3]} -- x[1] and x[2] are now swapped
__m256d xblend = _mm256_blend_pd(x, xflip128, 0b0110);
// repeat the same for y = {y[0],y[1],y[2],y[3]}
__m256d y;
// {y[1],y[0],y[3],y[2]}
__m256d yswap = _mm256_permute_pd(y, 0b0101);
// {y[3],y[2],y[1],y[0]}
__m256d yflip128 = _mm256_permute2f128_pd(yswap, yswap, 0x01);
// {y[0],y[2],y[1],y[3]}
__m256d yblend = _mm256_blend_pd(y, yflip128, 0b0110);
// {x[0],x[2],y[0],y[2]}
__m256d x02y02 = _mm256_permute2f128_pd(xblend, yblend, 0x20);
// {x[1],x[3],y[1],y[3]}
__m256d x13y13 = _mm256_permute2f128_pd(xblend, yblend, 0x31);