Question
Consider a randomly generated __m256i vector. Is there a faster precise way to convert it into a __m256 vector of floats between 0 (inclusive) and 1 (exclusive) than division by float(1ull<<32)?
Here's what I have tried so far, where iRand is the input and ans is the output:
const __m256 fRand = _mm256_cvtepi32_ps(iRand);
const __m256 normalized = _mm256_div_ps(fRand, _mm256_set1_ps(float(1ull<<32)));
const __m256 ans = _mm256_add_ps(normalized, _mm256_set1_ps(0.5f));
Answer 1:
The version below should be faster than your initial version that uses _mm256_div_ps. vdivps is quite slow: on my Haswell Xeon it's 18-21 cycles latency, 14 cycles throughput. Newer CPUs perform better, BTW: 11/5 on Skylake, 10/6 on Ryzen.
As said in the comments, the performance is fixable by replacing the divide with a multiply, and further improved with FMA. The problem with that approach is the quality of the distribution: if you try to force these numbers into your output interval via the rounding mode or by clipping, you introduce peaks in the probability distribution of the output numbers.
My implementation is not ideal either: it doesn't output all possible values in the output interval and skips many representable floats, especially near 0. But at least the distribution is very even.
__m256 __vectorcall randomFloats( __m256i randomBits )
{
    // Convert to random float bits
    __m256 result = _mm256_castsi256_ps( randomBits );
    // Zero out exponent bits, leave random bits in mantissa.
    // BTW since the mask value is constexpr, we don't actually need AVX2 instructions for this, it's just easier to code with set1_epi32.
    const __m256 mantissaMask = _mm256_castsi256_ps( _mm256_set1_epi32( 0x007FFFFF ) );
    result = _mm256_and_ps( result, mantissaMask );
    // Set sign + exponent bits to that of 1.0, which is sign=0, exponent=2^0.
    const __m256 one = _mm256_set1_ps( 1.0f );
    result = _mm256_or_ps( result, one );
    // Subtract 1.0. The above algorithm generates floats in range [1..2).
    // Can't use bit tricks to generate floats in [0..1) because it would cause them to be distributed very unevenly.
    return _mm256_sub_ps( result, one );
}
Update: if you want better precision, use the following version. But it’s no longer “the fastest”.
__m256 __vectorcall randomFloats_32( __m256i randomBits )
{
    // Convert to random float bits
    __m256 result = _mm256_castsi256_ps( randomBits );
    // Zero out exponent bits, leave random bits in mantissa.
    const __m256 mantissaMask = _mm256_castsi256_ps( _mm256_set1_epi32( 0x007FFFFF ) );
    result = _mm256_and_ps( result, mantissaMask );
    // Set sign + exponent bits to that of 1.0, which is sign=0, exponent=2^0.
    const __m256 one = _mm256_set1_ps( 1.0f );
    result = _mm256_or_ps( result, one );
    // Subtract 1.0. The above algorithm generates floats in range [1..2).
    result = _mm256_sub_ps( result, one );
    // Use the unused random bits to add extra randomness to the lower bits of the values.
    // This increases precision to 2^-32; however, most floats in the range can't store that many bits,
    // so fmadd will only add them for small enough values.
    // If you want uniformly distributed floats with 2^-24 precision, replace the second argument
    // in the following line with _mm256_set1_epi32( 0x80000000 ).
    // In this case you don't need to set rounding mode bits in MXCSR.
    __m256i extraBits = _mm256_and_si256( randomBits, _mm256_castps_si256( mantissaMask ) );
    extraBits = _mm256_srli_epi32( extraBits, 9 );
    __m256 extra = _mm256_castsi256_ps( extraBits );
    extra = _mm256_or_ps( extra, one );
    extra = _mm256_sub_ps( extra, one );
    _MM_SET_ROUNDING_MODE( _MM_ROUND_DOWN );
    constexpr float mul = 0x1p-23f; // The initial part of the algorithm has generated a uniform distribution with step 2^-23.
    return _mm256_fmadd_ps( extra, _mm256_set1_ps( mul ), result );
}
Answer 2:
First, no division: replace it with multiplication. While @Soonts' answer might be good enough for you, note that because of the mapping to the [1..2) interval, it produces uniform dyadic rationals of the form k/2^23, which is half of what could be generated. I prefer the method from S. Vigna (at the bottom), with all dyadic rationals of the form k/2^24 being equally likely.
Code, VC++2019, x64, Win10, Intel i7 Skylake
#include <cstdint>
#include <cstdio>
#include <random>
#include "immintrin.h"

auto p256_dec_u32(__m256i in) -> void {
    alignas(alignof(__m256i)) uint32_t v[8];
    _mm256_store_si256((__m256i*)v, in);
    printf("v8_u32: %u %u %u %u %u %u %u %u\n", v[0], v[1], v[2], v[3], v[4], v[5], v[6], v[7]);
}

auto p256_dec_f32(__m256 in) -> void {
    alignas(alignof(__m256)) float v[8];
    _mm256_store_ps(v, in);
    printf("v8_float: %e %e %e %e %e %e %e %e\n", v[0], v[1], v[2], v[3], v[4], v[5], v[6], v[7]);
}

auto main() -> int {
    const float c = 0x1.0p-24f; // or (1.0f / (uint32_t(1) << 24));
    const int N = 1000000;
    std::mt19937 rng{ 987654321ULL };
    __m256 sum = _mm256_set1_ps(0.0f);
    for (int k = 0; k != N; ++k) {
        alignas(alignof(__m256i)) uint32_t rnd[8] = { rng(), rng(), rng(), rng(), rng(), rng(), rng(), rng() };
        __m256i r = _mm256_load_si256((__m256i*)rnd);
        __m256 q = _mm256_mul_ps(_mm256_cvtepi32_ps(_mm256_srli_epi32(r, 8)), _mm256_set1_ps(c));
        sum = _mm256_add_ps(sum, q);
    }
    sum = _mm256_div_ps(sum, _mm256_set1_ps((float)N)); // computing the average, which should be close to 0.5
    p256_dec_f32(sum);
    return 0;
}
with output
5.002970e-01 4.997833e-01 4.996118e-01 5.004955e-01 5.002163e-01 4.997193e-01 4.996586e-01 5.001499e-01
Source: https://stackoverflow.com/questions/54869672/fastest-precise-way-to-convert-a-vector-of-integers-into-floats-between-0-and-1