Which is the fastest way to implement an operation that returns the absolute value of a number?
x = root(x²)
or
if !isPositive(x) then x = x * -1
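To make the two candidates concrete, here is a minimal sketch of both in C (the function names are mine, for illustration only):

#include <math.h>

/* Approach 1: x = root(x²). Note x * x can overflow to infinity
   for very large values of x. */
double abs_sqrt(double x) {
    return sqrt(x * x);
}

/* Approach 2: negate when not positive. */
double abs_branch(double x) {
    if (!(x >= 0))
        x = x * -1;
    return x;
}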
For completeness, if you are dealing with floating-point numbers, you can always do something like n * sign(n), where sign is a function that returns +1 if the number is positive and -1 if it is negative. In C this would be something like copysign(1.0, n) or (n > 0) - (n < 0).
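A minimal sketch of that idea, assuming C99's math library (the function names here are illustrative, not standard):

#include <math.h>

/* |n| as n * sign(n), with sign(n) obtained from copysign. */
double abs_via_sign(double n) {
    return n * copysign(1.0, n);
}

/* Integer variant using the (n > 0) - (n < 0) sign expression.
   Like -n itself, this overflows for INT_MIN. */
int iabs_via_sign(int n) {
    return n * ((n > 0) - (n < 0));
}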
Most machines use IEEE 754 as their floating point format these days, so you can clear the sign bit directly:
#include <stdint.h>
#include <string.h>

float float_abs(float x) {   /* renamed to avoid clashing with the standard fabs */
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);  /* type-pun safely, without aliasing issues */
    bits &= 0x7FFFFFFFu;             /* clear the sign bit (the top bit) */
    memcpy(&x, &bits, sizeof x);
    return x;
}
Given that the standard fabs function likely does this exact thing, your best bet is to use it when available. If you are lucky, the call will compile down to a couple of instructions and be inlined.
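For reference, the standard library spells this differently per type:

#include <math.h>    /* fabs, fabsf, fabsl for floating point */
#include <stdlib.h>  /* abs, labs, llabs for integers */

double d = fabs(-3.5);   /* 3.5 */
float  f = fabsf(-2.0f); /* 2.0f */
int    i = abs(-42);     /* 42 */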
For a list of negative numbers:
if you have zero stored in memory, simply use 0 - x, where x is the negative number.
Or if you do not have zero stored in memory: x - x - x, where x is the negative number.
Or, with brackets for clarity: (x) - (x) - (x) => (-n) - (-n) - (-n), where x = -n,
i.e. subtract the negative number from itself to get zero, then subtract it from zero.
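Spelled out in C, under the answer's assumption that x is known to be negative (a compiler will of course just fold this to -x):

/* Assumes x < 0: (x - x) gives zero, then 0 - x gives -x. */
int abs_of_negative(int x) {
    return x - x - x;
}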
What is faster depends heavily on the compiler and the CPU you're targeting. On most CPUs, and with all compilers, x = (x >= 0) ? x : -x; is the fastest way to get the absolute value, and in fact the standard functions often already offer this solution (e.g. fabs()). It compiles into a compare followed by a conditional-move instruction (CMOV), not into a conditional jump, though some platforms lack that instruction. The Intel compiler (but not Microsoft's or GCC) will automatically convert an if() into a conditional assignment, and will even try to optimize loops where possible.
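A sketch of that form, plus a common branch-free bit trick for 32-bit integers as a point of comparison (the bit trick is an aside of mine, not part of the answer above):

#include <stdint.h>

/* Ternary form: typically compiled to compare + CMOV, not a jump. */
int32_t abs_cmov(int32_t x) {
    return (x >= 0) ? x : -x;
}

/* Branch-free variant: mask is 0 when x >= 0 and -1 (all ones) when
   x < 0, so (x + mask) ^ mask negates x exactly when it is negative.
   Like -x itself, it overflows for INT32_MIN. */
int32_t abs_bits(int32_t x) {
    int32_t mask = x >> 31;  /* arithmetic shift assumed; universal in practice */
    return (x + mask) ^ mask;
}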
Branching code is in general slower than conditional assignment when the CPU uses statistical branch prediction: an if() may be slower on average if the operation is repeated many times and the result of the condition keeps changing. CPUs such as Intel's will start to compute both branches and drop the invalid one, which can become critical with large if() bodies or large loop counts.
sqr() and sqrt() are single built-in instructions on modern Intel CPUs, and they aren't slow, but they are imprecise, and loading the registers takes time as well.
Related question: Why is a CPU branch instruction slow?
Most likely, the professor wanted the student to do research on this matter; it's a semi-provocative question/task that will do only good if it leads the student to think independently and look for additional sources.
The modulo operation is used to find a remainder; you mean absolute value. I modified the question, because it should be if !pos(x) then x = x * -1 (the "not" was missing).
I wouldn't worry about the efficiency of an if statement. Instead focus on the readability of your code. If you identify that there is an efficiency problem, then focus on profiling your code to find real bottlenecks.
If you want to keep an eye out for efficiency while you code, you should only worry about the big-O complexity of your algorithms.
If statements are very efficient: the CPU evaluates the expression and then simply changes the program counter based on the condition. The program counter stores the address of the next instruction to be executed.
Multiplication by -1 and checking whether a value is greater than 0 can each be reduced to a single assembly instruction.
Squaring a number and then finding its root is definitely more operations than the if with a negation.
If you are simply comparing the absolute values of two numbers (e.g. you don't need the absolute value of either after the comparison), just square both values to make them positive (removing the sign of each); the larger square will then be greater than the smaller square.
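A small sketch of that comparison trick (note that the squares can overflow for large integer inputs, so this is safest with floating-point values in range):

#include <stdbool.h>

/* True when |a| > |b|, without computing either absolute value. */
bool abs_greater(double a, double b) {
    return a * a > b * b;
}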
For completeness, here's a way to do it for IEEE floats on x86 systems in C++ (note that this kind of type punning formally violates strict aliasing; std::memcpy, or std::bit_cast in C++20, is the well-defined alternative):
*(reinterpret_cast<uint32_t*>(&foo)) &= 0xffffffff >> 1;  // foo is a float; this clears its sign bit