Question
In C/C++, why should one use abs() or fabs() to find the absolute value of a variable instead of the following code?
int absoluteValue = value < 0 ? -value : value;
Does it have something to do with fewer instructions at a lower level?
Answer 1:
The "conditional abs" you propose is not equivalent to std::abs
(or fabs
) for floating point numbers, see e.g.
#include <iostream>
#include <cmath>
int main() {
    double d = -0.0;
    double a = d < 0 ? -d : d;
    std::cout << d << ' ' << a << ' ' << std::abs(d);
}
output:
-0 -0 0
Given that -0.0 and 0.0 represent the same real number '0', this difference may or may not matter, depending on how the result is used. However, the abs function as specified by IEEE 754 mandates that the sign bit of the result be 0, which forbids the result -0.0. I personally think anything used to calculate some "absolute value" should match this behavior.
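A quick way to see the lingering sign bit is std::signbit; a minimal sketch along those lines:

#include <cmath>
#include <iostream>

int main() {
    double d = -0.0;
    double cond = d < 0 ? -d : d; // condition is false, so -0.0 passes through unchanged
    std::cout << std::signbit(cond) << ' '           // 1: sign bit still set
              << std::signbit(std::fabs(d)) << '\n'; // 0: fabs clears it, as IEEE 754 requires
}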
For integers, both variants are equivalent in both runtime and behavior.
But as std::abs (or the fitting C equivalents) are known to be correct and easier to read, you should just always prefer those.
Answer 2:
The first thing that comes to mind is readability.
Compare these two lines of code:
int x = something, y = something, z = something;
// Compare
int absall = (x > 0 ? x : -x) + (y > 0 ? y : -y) + (z > 0 ? z : -z);
int absall = abs(x) + abs(y) + abs(z);
Answer 3:
The compiler will most likely do the same thing for both at the lowest level - at least a modern, competent compiler.
However, at least for floating point, you'll end up writing a few dozen lines if you want to handle all the special cases of infinity, not-a-number (NaN), negative zero and so on.
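For illustration, a sketch of how std::fabs copes with those special values (output as on a typical IEEE 754 implementation):

#include <cmath>
#include <iostream>
#include <limits>

int main() {
    double nan = std::numeric_limits<double>::quiet_NaN();
    double inf = std::numeric_limits<double>::infinity();
    std::cout << std::fabs(nan) << '\n'    // nan (sign bit cleared)
              << std::fabs(-inf) << '\n'   // inf
              << std::fabs(-0.0) << '\n';  // 0
}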
It is also easier to read that abs is taking the absolute value than to read "if it's less than zero, negate it".
If the compiler is "stupid", it may well end up generating worse code for a = (a < 0) ? -a : a, because it forces an if (even if it's hidden), and that could well be worse than the built-in floating point abs instruction on that processor (aside from the complexity of handling special values).
Both Clang (6.0 pre-release) and gcc (4.9.2) generate WORSE code for the second case.
I wrote this little sample:
#include <cmath>
#include <cstdlib>

extern int intval;
extern float floatval;

void func1()
{
    int a = std::abs(intval);
    float f = std::abs(floatval);
    intval = a;
    floatval = f;
}

void func2()
{
    int a = intval < 0 ? -intval : intval;
    float f = floatval < 0 ? -floatval : floatval;
    intval = a;
    floatval = f;
}
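For reference, assembly like the listings below can be reproduced with something along these lines (the file name is a placeholder and exact flags may vary):

clang++ -O2 -S abs_funcs.cpp -o abs_funcs_clang.s
g++ -O2 -S abs_funcs.cpp -o abs_funcs_gcc.s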
Clang generates this code (func1, then func2):
_Z5func1v: # @_Z5func1v
movl intval(%rip), %eax
movl %eax, %ecx
negl %ecx
cmovll %eax, %ecx
movss floatval(%rip), %xmm0 # xmm0 = mem[0],zero,zero,zero
andps .LCPI0_0(%rip), %xmm0
movl %ecx, intval(%rip)
movss %xmm0, floatval(%rip)
retq
_Z5func2v: # @_Z5func2v
movl intval(%rip), %eax
movl %eax, %ecx
negl %ecx
cmovll %eax, %ecx
movss floatval(%rip), %xmm0
movaps .LCPI1_0(%rip), %xmm1
xorps %xmm0, %xmm1
xorps %xmm2, %xmm2
movaps %xmm0, %xmm3
cmpltss %xmm2, %xmm3
movaps %xmm3, %xmm2
andnps %xmm0, %xmm2
andps %xmm1, %xmm3
orps %xmm2, %xmm3
movl %ecx, intval(%rip)
movss %xmm3, floatval(%rip)
retq
g++ func1:
_Z5func1v:
movss .LC0(%rip), %xmm1
movl intval(%rip), %eax
movss floatval(%rip), %xmm0
andps %xmm1, %xmm0
sarl $31, %eax
xorl %eax, intval(%rip)
subl %eax, intval(%rip)
movss %xmm0, floatval(%rip)
ret
g++ func2:
_Z5func2v:
movl intval(%rip), %eax
movl intval(%rip), %edx
pxor %xmm1, %xmm1
movss floatval(%rip), %xmm0
sarl $31, %eax
xorl %eax, %edx
subl %eax, %edx
ucomiss %xmm0, %xmm1
jbe .L3
movss .LC3(%rip), %xmm1
xorps %xmm1, %xmm0
.L3:
movl %edx, intval(%rip)
movss %xmm0, floatval(%rip)
ret
Note that with both compilers the code is notably more complex in the second form, and in the gcc case it uses a branch. Clang uses more instructions, but no branch. I'm not sure which is faster on which processor models, but quite clearly more instructions are rarely better.
Answer 4:
Why use abs() or fabs() instead of conditional negation?
Various reasons have already been stated, yet consider the conditional code's advantages, as abs(INT_MIN) should be avoided.
There is a good reason to use the conditional code in lieu of abs() when the negative absolute value of an integer is sought:
// Negative absolute value
int nabs(int value) {
    return -abs(value); // abs(INT_MIN) is undefined behavior
}

int nabs(int value) {
    return value < 0 ? value : -value; // well defined for all `int`
}
When a positive absolute value is needed and value == INT_MIN is a real possibility, abs(), for all its clarity and speed, fails a corner case. Various alternatives exist:
unsigned absoluteValue = value < 0 ? (0u - value) : (0u + value);
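Wrapped up as a function (a sketch; uabs is a name chosen here, not a standard function), this stays well defined because unsigned arithmetic wraps around:

#include <limits.h>

// Unsigned absolute value: well defined even for INT_MIN, because
// 0u - (unsigned)value is computed modulo UINT_MAX + 1.
unsigned uabs(int value) {
    return value < 0 ? 0u - (unsigned)value : (unsigned)value;
}
// uabs(INT_MIN) yields INT_MAX + 1u, with no undefined behavior.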
Answer 5:
There might be a more efficient low-level implementation than a conditional branch on a given architecture. For example, the CPU might have an abs instruction, or a way to extract the sign bit without the overhead of a branch. Supposing an arithmetic right shift can fill a register r with -1 if the number is negative, or 0 if positive, abs x could become (x+r)^r (and, seeing Mats Petersson's answer, g++ actually does this on x86).
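Spelled out in C++ (a sketch; this assumes an arithmetic right shift for negative values, which is the common two's-complement behavior and guaranteed from C++20 on, and it still overflows for INT_MIN just as -x would):

#include <climits>

int branchless_abs(int x) {
    int r = x >> (sizeof(int) * CHAR_BIT - 1); // arithmetic shift: r = -1 if x < 0, else 0
    return (x + r) ^ r;                        // identity when r == 0, two's-complement negation when r == -1
}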
Other answers have gone over the situation for IEEE floating-point.
Trying to tell the compiler to perform a conditional branch instead of trusting the library is probably premature optimization.
Answer 6:
Consider that you could feed a complicated expression into abs(). If you code it with expr > 0 ? expr : -expr, you have to repeat the whole expression three times, and it will be evaluated twice. In addition, the two results (before and after the colon) might turn out to be of different types (like signed int / unsigned int), which rules out using the expression directly in a return statement.
Of course, you could add a temporary variable, but that solves only part of the problem, and is not better in any way either.
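A sketch of the repetition (expensive() stands in for any complicated or costly expression; the name is made up here):

#include <cmath>

double expensive(double x); // hypothetical costly computation

double viaConditional(double x) {
    return expensive(x) > 0 ? expensive(x) : -expensive(x); // written three times, evaluated twice
}

double viaFabs(double x) {
    return std::fabs(expensive(x)); // written once, evaluated once
}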
Answer 7:
...and if you make it into a macro, you can have multiple evaluations that you may not want (side effects). Consider:
#define ABS(a) ((a)<0?-(a):(a))
and use:
f = 5.0;
f = ABS(f = fmul(f, b));
which would expand to
f = ((f = fmul(f, b)) < 0 ? -(f = fmul(f, b)) : (f = fmul(f, b)));
A function call won't have these unintended side effects.
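If a hand-rolled version is wanted anyway, an inline function sidesteps the problem, since its argument is evaluated exactly once (a minimal sketch):

static inline double abs_fn(double a) { return a < 0 ? -a : a; }
// f = abs_fn(f = fmul(f, b)); now evaluates fmul(f, b) only once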
Answer 8:
Assuming that the compiler won't be able to determine that both abs() and conditional negation are attempting to achieve the same goal, conditional negation compiles to a compare instruction, a conditional jump instruction, and a move instruction. abs(), by contrast, either compiles to an actual absolute value instruction, in instruction sets that support such a thing, or to a bitwise AND that keeps everything the same except for the sign bit. Each instruction above is typically one cycle, so using abs() is likely to be at least as fast as conditional negation, or faster (and the compiler might still recognize that you are attempting to calculate an absolute value when using conditional negation and generate an absolute value instruction anyway). Even if there is no change in the compiled code, abs() is still more readable than conditional negation.
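As a sketch of that "bitwise AND on the sign bit" idea (using C++20's std::bit_cast; the mask 0x7FFFFFFF keeps every bit of an IEEE 754 float except the sign):

#include <bit>
#include <cstdint>

float fabs_bits(float f) {
    std::uint32_t u = std::bit_cast<std::uint32_t>(f);
    return std::bit_cast<float>(u & 0x7FFFFFFFu); // clear only the sign bit
}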
Answer 9:
The intent behind abs() is "(unconditionally) set the sign of this number to positive". Even if that had to be implemented as a conditional based on the current state of the number, it's probably more useful to be able to think of it as a simple "do this", rather than a more complex "if… this… that".
Source: https://stackoverflow.com/questions/48608993/why-use-abs-or-fabs-instead-of-conditional-negation