Question
This question was originally posed for SSE2 here. Since every single algorithm overlapped with ARMv7a+NEON's support for the same operations, the question was updated to include the ARMv7+NEON versions. At the request of a commenter, this question is asked here to show that it is indeed a separate topic and to provide alternative solutions that might be more practical for ARMv7+NEON. The net purpose of these questions is to find ideal implementations for consideration into WebAssembly SIMD.
Answer 1:
From the original post, the best x64/SSE2 algorithm implemented on ARMv7+NEON works as follows:
(a[32:63] == b[32:63]) & (b[0:63] - a[0:63]) yields a mask of 0xFFFFFFFF......... for every case where the top 32 bits are equal and a[0:31] > b[0:31]. In all other cases, such as when the top 32 bits are not equal or a[0:31] < b[0:31], it yields 0x0. This has the effect of taking the bottom 32 bits of each integer and propagating them into the upper 32 bits as a mask when the top 32 bits are inconsequential and the lower 32 bits are significant. For the remaining cases, it takes the signed comparison of the top 32 bits and ORs the two results together: for example, if a[32:63] > b[32:63], then a is definitely greater than b, regardless of the least significant bits. Finally, it swizzles/shuffles/transposes the upper 32 bits of each 64-bit mask into the lower 32 bits to produce a full 64-bit mask.
An illustrative example implementation is in this Godbolt.
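A minimal scalar sketch of the same trick, operating on one 64-bit lane at a time, may make the masking steps easier to follow (cmpgt_scalar is a hypothetical name, not from the post):

```c
#include <stdint.h>

/* Scalar per-lane illustration of the algorithm above:
   the borrow of the full 64-bit subtract decides when the
   high halves are equal; otherwise the signed comparison of
   the high halves decides. */
static int64_t cmpgt_scalar(int64_t a, int64_t b) {
    int32_t ahi = (int32_t)(a >> 32);           /* a[32:63], signed */
    int32_t bhi = (int32_t)(b >> 32);           /* b[32:63], signed */
    uint64_t diff = (uint64_t)b - (uint64_t)a;  /* b[0:63] - a[0:63], wrapping */
    /* keep the borrow's sign bit only when the high halves are equal */
    uint64_t lo_gt = (ahi == bhi) ? (diff >> 63) : 0;
    /* when the high halves differ, their signed comparison decides */
    uint64_t hi_gt = (uint64_t)(ahi > bhi);
    return (lo_gt | hi_gt) ? -1 : 0;
}
```

Note that the low halves are effectively compared as unsigned values (via the borrow of the subtraction), while only the high halves use a signed comparison, which matches full 64-bit signed ordering.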
Answer 2:
Signed 64-bit saturating subtract allows for a better NEON solution than what is possible with SSE2.
(assumes vqsubq_s64 is a 64-bit equivalent to _mm_subs_epi16 ...)
int64_t cmpgt(int64_t a, int64_t b) {
    int64_t r = saturating_subtract_s64(b, a);
    return (r & (~a | b)) >> 63;
}
untested translation:
uint64x2_t pcmpgtq_armv7(int64x2_t a, int64x2_t b) {
    int64x2_t r = vandq_s64(vqsubq_s64(b, a), vornq_s64(b, a));
    return vreinterpretq_u64_s64(vshrq_n_s64(r, 63));
}
It takes 4 instructions, and the vqsubq_s64 (saturating subtract) and vornq_s64 (or-not) instructions could run in parallel.
The free chapter of Hacker's Delight gives the following formulas:
// return (a > b) ? -1LL : 0LL;
int64_t cmpgt(int64_t a, int64_t b) {
    return ((b & ~a) | ((b - a) & ~(b ^ a))) >> 63;
}

int64_t cmpgt(int64_t a, int64_t b) {
    return ((b - a) ^ ((b ^ a) & ((b - a) ^ b))) >> 63;
}
Source: https://stackoverflow.com/questions/65191073/what-is-the-most-efficient-way-to-support-cmgt-with-64bit-signed-comparisons-on