Question
I'm just doing some benchmarking and found that fabsf() is often about 10x slower than fabs(). I disassembled both, and it turns out the double version uses the fabs instruction while the float version does not. Can this be improved? The following is faster, but not by much, and I'm afraid it may not work; it feels a little too low-level:
float mabs(float i)
{
    // (unsigned int substituted for the original nonstandard MUINT32 typedef)
    (*reinterpret_cast<unsigned int*>(&i)) &= 0x7fffffff;
    return i;
}
Edit: Sorry, I forgot to mention the compiler. I'm still using good old VS2005, no special libraries.
Answer 1:
You can easily test the different possibilities using the code below. It essentially pits your bit-fiddling against a naive template abs and std::abs. Unsurprisingly, the naive template abs wins. Well, it is somewhat surprising that it wins; I'd expect std::abs to be equally fast. Note that -O3 actually made things slower (at least on Coliru).
Coliru's host system shows these timings:
random number generation: 4240 ms
naive template abs: 190 ms
ugly bitfiddling abs: 241 ms
std::abs: 204 ms
::fabsf: 202 ms
And these timings for a Virtualbox VM running Arch with GCC 4.9 on a Core i7:
random number generation: 1453 ms
naive template abs: 73 ms
ugly bitfiddling abs: 97 ms
std::abs: 57 ms
::fabsf: 80 ms
And these timings on MSVS2013 (Windows 7 x64):
random number generation: 671 ms
naive template abs: 59 ms
ugly bitfiddling abs: 129 ms
std::abs: 109 ms
::fabsf: 109 ms
If I haven't made some blatantly obvious mistake in this benchmark code (don't shoot me over it, I wrote it up in about two minutes), I'd say just use std::abs, or the template version if that turns out to be slightly faster for you.
The code:
#include <algorithm>
#include <chrono>
#include <cmath>
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <limits>
#include <random>
#include <vector>
#include <math.h>

using Clock = std::chrono::high_resolution_clock;
using milliseconds = std::chrono::milliseconds;

template<typename T>
T abs_template(T t)
{
    return t > 0 ? t : -t;
}

float abs_ugly(float f)
{
    (*reinterpret_cast<std::uint32_t*>(&f)) &= 0x7fffffff;
    return f;
}

int main()
{
    std::random_device rd;
    std::mt19937 mersenne(rd());
    // Note: lowest() is already negative; negating it would collapse the
    // range to a single value. The distribution's double default keeps
    // b - a from overflowing.
    std::uniform_real_distribution<> dist(std::numeric_limits<float>::lowest(),
                                          std::numeric_limits<float>::max());
    std::vector<float> v(100000000);

    Clock::time_point t0 = Clock::now();
    std::generate(std::begin(v), std::end(v), [&dist, &mersenne]() { return dist(mersenne); });
    Clock::time_point trand = Clock::now();

    volatile float temp;
    for (float f : v)
        temp = abs_template(f);
    Clock::time_point ttemplate = Clock::now();

    for (float f : v)
        temp = abs_ugly(f);
    Clock::time_point tugly = Clock::now();

    for (float f : v)
        temp = std::abs(f);
    Clock::time_point tstd = Clock::now();

    for (float f : v)
        temp = ::fabsf(f);
    Clock::time_point tfabsf = Clock::now();

    milliseconds random_time = std::chrono::duration_cast<milliseconds>(trand - t0);
    milliseconds template_time = std::chrono::duration_cast<milliseconds>(ttemplate - trand);
    milliseconds ugly_time = std::chrono::duration_cast<milliseconds>(tugly - ttemplate);
    milliseconds std_time = std::chrono::duration_cast<milliseconds>(tstd - tugly);
    milliseconds c_time = std::chrono::duration_cast<milliseconds>(tfabsf - tstd);

    std::cout << "random number generation: " << random_time.count() << " ms\n"
              << "naive template abs: " << template_time.count() << " ms\n"
              << "ugly bitfiddling abs: " << ugly_time.count() << " ms\n"
              << "std::abs: " << std_time.count() << " ms\n"
              << "::fabsf: " << c_time.count() << " ms\n";
}
Oh, and to answer your actual question: if the compiler can't generate more efficient code, I doubt there is a faster way, save for micro-optimized assembly, especially for an elementary operation like this.
Answer 2:
There are many things at play here. First off, the x87 co-processor is deprecated in favor of SSE/AVX, so I'm surprised to read that your compiler still uses the fabs instruction. It's quite possible that the others who posted benchmarks on this question use a platform that supports SSE; your results might be wildly different.
I'm not sure why your compiler uses different logic for fabs and fabsf. It's entirely possible to load a float onto the x87 stack and use the fabs instruction on it just as easily. The problem with reproducing this yourself, without compiler support, is that you can't integrate the operation into the compiler's normal optimizing pipeline: if you say "load this float, use the fabs instruction, return this float to memory", the compiler will do exactly that... and it may involve storing back to memory a float that was already ready to be processed, loading it back in, applying the fabs instruction, storing it back to memory, and loading it again onto the x87 stack to resume the normal, optimizable pipeline. That would be four wasted load/store operations when all it needed to do was fabs.
In short, you are unlikely to beat integrated compiler support for floating-point operations. If you don't have that support, inline assembly might just make things even slower than they presumably already are. The fastest thing for you to do might even be to call the fabs function instead of fabsf on your floats.
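Taken literally, that suggestion is just a one-line wrapper; a minimal sketch (the name fast_fabsf is made up here for illustration):

```cpp
#include <math.h>

// Hypothetical wrapper for the suggestion above: promote the float to
// double, use the double fabs (which such a compiler maps to the x87
// fabs instruction), then narrow back. The round trip is exact because
// every float value is representable as a double.
float fast_fabsf(float f)
{
    return static_cast<float>(fabs(static_cast<double>(f)));
}
```

Whether this actually beats fabsf depends entirely on the compiler and target, so benchmark it as above before committing to it.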
For reference, modern compilers on modern platforms use the SSE instructions andps (for floats) and andpd (for doubles) to mask off the sign bit, very much like you're doing yourself, but without the language-semantics issues. Both are equally fast. Modern compilers may also detect patterns like x < 0 ? -x : x and produce the optimal andps/andpd instruction without the need for a compiler intrinsic.
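The effect of andps can also be reproduced explicitly with SSE intrinsics; a minimal sketch, assuming an SSE2-capable x86 target (the name abs_sse is made up here):

```cpp
#include <emmintrin.h>  // SSE2 intrinsics

// AND the float's bits with a mask that clears only bit 31, the
// IEEE-754 sign bit -- the same operation a compiler-generated andps
// performs.
float abs_sse(float f)
{
    const __m128 mask = _mm_castsi128_ps(_mm_set1_epi32(0x7fffffff));
    return _mm_cvtss_f32(_mm_and_ps(_mm_set_ss(f), mask));
}
```

In practice there is no need to write this by hand; it only shows that the instruction the compiler emits is doing nothing more exotic than the questioner's bit mask.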
Answer 3:
Did you try the std::abs overload for float? That would be the canonical C++ way.
Also, as an aside, I should note that your bit-modifying version violates the strict-aliasing rules (in addition to relying on the more fundamental assumption that the integer type and float have the same size) and as such invokes undefined behavior.
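For completeness, the bit-masking trick can be written without violating strict aliasing by copying the representation through memcpy, which optimizing compilers reduce to the same single AND; a sketch (abs_safe is a made-up name):

```cpp
#include <cstdint>
#include <cstring>

// Aliasing-safe sign-bit masking: copy the float's bits into an
// integer, clear the sign bit, and copy them back. memcpy is the
// sanctioned way to type-pun; the copies are optimized away.
float abs_safe(float f)
{
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    bits &= 0x7fffffffu;  // clear bit 31, the IEEE-754 sign bit
    std::memcpy(&f, &bits, sizeof f);
    return f;
}
```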
Source: https://stackoverflow.com/questions/23474796/is-there-a-fast-fabsf-replacement-for-float-in-c