In C/C++, why should one use abs() or fabs() to find the absolute value of a variable instead of the following code?

int absoluteVal = value < 0 ? -value : value;
Assuming the compiler can't determine that both abs() and conditional negation achieve the same goal, conditional negation compiles to a compare instruction, a conditional jump, and a move, whereas abs() compiles either to an actual absolute-value instruction, on instruction sets that support one, or to a bitwise AND that leaves everything unchanged except the sign bit. Each of the instructions above typically takes one cycle, so using abs() is likely to be at least as fast as conditional negation, or faster (and the compiler may still recognize that conditional negation computes an absolute value and emit an absolute-value instruction anyway). Even if the compiled code is identical, abs() is still more readable than conditional negation.
Consider that you could feed a complicated expression into abs(). If you code it with expr > 0 ? expr : -expr, you have to repeat the whole expression three times, and it will be evaluated twice.

In addition, the two results (before and after the colon) might turn out to be of different types (like signed int / unsigned int), which prevents using the expression directly in a return statement. Of course, you could add a temporary variable, but that only solves part of the problem, and is not better in any way either.
Why use abs() or fabs() instead of conditional negation?
Various reasons have already been stated, but consider one advantage of the conditional code: abs(INT_MIN) should be avoided.
There is a good reason to use the conditional code in lieu of abs() when the negative absolute value of an integer is sought:
// Negative absolute value
int nabs(int value) {
    return -abs(value);  // abs(INT_MIN) is undefined behavior.
}

int nabs(int value) {
    return value < 0 ? value : -value;  // well defined for all `int`
}
When a positive absolute value is needed and value == INT_MIN is a real possibility, abs(), for all its clarity and speed, fails a corner case. One of various alternatives:
unsigned absoluteValue = value < 0 ? (0u - value) : (0u + value);
The first thing that comes to mind is readability.
Compare these two lines of code:
int x = something, y = something, z = something;
// Compare
int absall = (x > 0 ? x : -x) + (y > 0 ? y : -y) + (z > 0 ? z : -z);
int absall = abs(x) + abs(y) + abs(z);
The "conditional abs" you propose is not equivalent to std::abs (or fabs) for floating point numbers; see e.g.
#include <iostream>
#include <cmath>

int main() {
    double d = -0.0;
    double a = d < 0 ? -d : d;
    std::cout << d << ' ' << a << ' ' << std::abs(d);
}
output:
-0 -0 0
Given that -0.0 and 0.0 represent the same real number '0', this difference may or may not matter, depending on how the result is used. However, the abs function as specified by IEEE 754 mandates that the sign bit of the result be 0, which forbids the result -0.0. I personally think anything used to calculate some "absolute value" should match this behavior.
For integers, both variants are equivalent in both runtime and behavior.
But as std::abs
(or the fitting C equivalents) are known to be correct and easier to read, you should just always prefer those.
There might be a more-efficient low-level implementation than a conditional branch, on a given architecture. For example, the CPU might have an abs
instruction, or a way to extract the sign bit without the overhead of a branch. Supposing an arithmetic right shift can fill a register r with -1 if the number is negative, or 0 otherwise, abs(x) could become (x + r) ^ r (and, as Mats Petersson's answer shows, g++ actually does this on x86).
Other answers have gone over the situation for IEEE floating-point.
Trying to tell the compiler to perform a conditional branch instead of trusting the library is probably premature optimization.