I am currently writing a vectorized version of the QR decomposition (linear system solver) using SSE and AVX intrinsics. One of the substeps requires selecting the sign of a value based on the sign of another value (essentially a copysign operation).
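All of the vector routines below follow the same convention: the sign comes from the first argument and the magnitude from the second, i.e. scalar std::copysign(to, from). A minimal scalar reference (helper name mine, for illustration):
#include <cmath>
// Scalar reference: the result carries the magnitude of `to` and the
// sign of `from`, i.e. std::copysign(to, from).
double copysign_ref(double from, double to) {
return std::copysign(to, from);
}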
Here's a version that I think is slightly better than the accepted answer if you target icc:
__m256d copysign_pd(__m256d from, __m256d to) {
__m256d const avx_sigbit = _mm256_set1_pd(-0.);
return _mm256_or_pd(_mm256_and_pd(avx_sigbit, from), _mm256_andnot_pd(avx_sigbit, to));
}
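The question also mentions SSE; the same mask trick carries over unchanged to __m128d. A minimal, untested sketch (function name mine), using only SSE2 intrinsics:
#include <immintrin.h>
__m128d copysign_pd_sse(__m128d from, __m128d to) {
__m128d const sse_sigbit = _mm_set1_pd(-0.); // only the sign bit set in each lane
return _mm_or_pd(_mm_and_pd(sse_sigbit, from), _mm_andnot_pd(sse_sigbit, to));
}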
It uses _mm256_set1_pd rather than a broadcast intrinsic. On clang and gcc this is mostly a wash, but on icc the broadcast version actually writes a constant to the stack and then broadcasts from it, which is ... terrible.
Godbolt showing AVX-512 code; change the -march= to -march=skylake to see AVX2 code.
Here's an untested AVX-512 version which uses vpternlogq directly; it compiles down to a single vpternlogd instruction on icc and clang (gcc includes a separate broadcast):
__m512d copysign_pd_alt(__m512d from, __m512d to) {
const __m512i sigbit = _mm512_castpd_si512(_mm512_set1_pd(-0.));
// imm8 0xE4 encodes C ? A : B: bits of `from` where the sign mask is set, bits of `to` elsewhere
return _mm512_castsi512_pd(_mm512_ternarylogic_epi64(_mm512_castpd_si512(from), _mm512_castpd_si512(to), sigbit, 0xE4));
}
You could make a 256-bit version of this for when AVX-512 is enabled but you're dealing with __m256* vectors, as sketched below.
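For example, assuming AVX-512VL is available (which provides the 256-bit _mm256_ternarylogic_epi64), an untested sketch (function name mine):
__m256d copysign_pd_vl(__m256d from, __m256d to) {
const __m256i sigbit = _mm256_castpd_si256(_mm256_set1_pd(-0.));
// 0xE4 = C ? A : B, exactly as in the 512-bit version above
return _mm256_castsi256_pd(_mm256_ternarylogic_epi64(_mm256_castpd_si256(from), _mm256_castpd_si256(to), sigbit, 0xE4));
}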
AVX versions for float and double:
#include <immintrin.h>
__m256 copysign_ps(__m256 from, __m256 to) {
constexpr float signbit = -0.f;
auto const avx_signbit = _mm256_broadcast_ss(&signbit);
return _mm256_or_ps(_mm256_and_ps(avx_signbit, from), _mm256_andnot_ps(avx_signbit, to)); // (avx_signbit & from) | (~avx_signbit & to)
}
__m256d copysign_pd(__m256d from, __m256d to) {
constexpr double signbit = -0.;
auto const avx_signbit = _mm256_broadcast_sd(&signbit);
return _mm256_or_pd(_mm256_and_pd(avx_signbit, from), _mm256_andnot_pd(avx_signbit, to)); // (avx_signbit & from) | (~avx_signbit & to)
}
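A quick sanity check of the convention (sign from the first argument, magnitude from the second), using the copysign_pd just defined; note that _mm256_set_pd lists lanes high-to-low, so out[0] is the lowest lane:
#include <cstdio>
int main() {
__m256d from = _mm256_set_pd(-1., -1., 1., 1.); // signs to copy
__m256d to = _mm256_set_pd( 2., 3., 4., -5.); // magnitudes to keep
double out[4];
_mm256_storeu_pd(out, copysign_pd(from, to));
std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]); // prints: 5 4 -3 -2
}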
See the generated assembly, and The Intel Intrinsics Guide for the intrinsics used here.
With AVX2, avx_signbit can be generated with no constants:
__m256 copysign2_ps(__m256 from, __m256 to) {
auto a = _mm256_castps_si256(from);
auto avx_signbit = _mm256_castsi256_ps(_mm256_slli_epi32(_mm256_cmpeq_epi32(a, a), 31));
return _mm256_or_ps(_mm256_and_ps(avx_signbit, from), _mm256_andnot_ps(avx_signbit, to)); // (avx_signbit & from) | (~avx_signbit & to)
}
__m256d copysign2_pd(__m256d from, __m256d to) {
auto a = _mm256_castpd_si256(from);
auto avx_signbit = _mm256_castsi256_pd(_mm256_slli_epi64(_mm256_cmpeq_epi64(a, a), 63));
return _mm256_or_pd(_mm256_and_pd(avx_signbit, from), _mm256_andnot_pd(avx_signbit, to)); // (avx_signbit & from) | (~avx_signbit & to)
}
Still, though, both clang and gcc calculate avx_signbit at compile time and replace it with constants loaded from the .rodata section, which is, IMO, sub-optimal.