Many algorithms require computing (-1)^n (both integers), usually as a factor in a series. That is, a factor that is -1 for odd n and 1 for even n. What is the most efficient way to evaluate it?
Well, if we are performing the calculation in a series, why not handle it with a positive loop and a negative loop, skipping the sign evaluation completely?
The Taylor series expansion approximating the natural log of (1+x) is a perfect example of this type of problem: ln(1+x) = x - x^2/2 + x^3/3 - x^4/4 + ..., so each term carries a factor of (-1)^(n+1), or equivalently (-1)^(n-1). There is no need to calculate this factor. You can "slice" the problem by either executing one loop that handles two terms per iteration, or two loops, one for the odd terms and one for the even terms.
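For instance, here is a minimal sketch of the one-loop-per-two-terms approach (the function and variable names are mine, not from the original post):

#include <cmath>
#include <cstdio>

// Sketch: ln(1+x) via its Taylor series, taking two terms per iteration
// so the sign factor never has to be computed.
double log1p_series(double x, int pairs)
{
    double sum = 0.0;
    double xn = x;                   // running power of x, starts at x^1
    for (int n = 1; n <= 2 * pairs; n += 2) {
        sum += xn / n;               // odd-indexed term, sign +1
        xn *= x;
        sum -= xn / (n + 1);         // even-indexed term, sign -1
        xn *= x;
    }
    return sum;
}

int main()
{
    // sanity check against the library implementation
    std::printf("%f vs %f\n", log1p_series(0.5, 20), std::log(1.5));
}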
Of course, since the calculation is by its nature over the domain of real numbers, you will be using the floating-point unit to evaluate the individual terms anyway, and once you have decided to do that, you should just use the library implementation of the natural logarithm. But if for some reason you decide not to, it will certainly be faster (though not by much) not to waste cycles computing -1 to the nth power.
Perhaps each loop can even run in a separate thread. Maybe the problem can even be vectorized.
Usually you don't actually calculate (-1)^n; instead you track the current sign (as a number that is either -1 or 1) and flip it on every iteration (sign = -sign). Do this as you handle your n values in order and you will get the same result.
EDIT: Note that part of the reason I recommend this is that there is rarely any actual semantic value in the representation (-1)^n; it is merely a convenient way of flipping the sign between iterations.
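A minimal sketch of the idiom, using the alternating harmonic series as a stand-in example (the names are illustrative, not from the answer):

#include <cstdio>

// Sum of (-1)^n / (n+1) for n = 0..terms-1 (converges to ln 2),
// tracking a running sign instead of computing a power.
double alternating_sum(int terms)
{
    double sum = 0.0;
    int sign = 1;                    // starts at (-1)^0
    for (int n = 0; n < terms; ++n) {
        sum += sign / double(n + 1);
        sign = -sign;                // flip instead of computing (-1)^n
    }
    return sum;
}

int main()
{
    std::printf("%f\n", alternating_sum(1000));   // ~0.6926 (ln 2 = 0.6931...)
}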
If it's speed you need, here goes ...
int inline minus_1_pow(int n) {
    static const int retvals[] {1, -1};
    return retvals[n & 1];
}
The Visual C++ compiler with optimization turned up to 11 compiles this down to two machine instructions, neither of which is a branch. It also optimizes away the retvals array, so there are no cache misses.
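For illustration, a hypothetical caller, assuming the minus_1_pow function above is in scope:

#include <cstdio>

int main()
{
    // minus_1_pow(n) supplies the (-1)^n factor per term of the
    // alternating harmonic series, which approaches ln 2.
    double sum = 0.0;
    for (int n = 0; n < 1000; ++n)
        sum += minus_1_pow(n) / double(n + 1);
    std::printf("%f\n", sum);
}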
What about (1 - (n % 2)) - (n % 2)? The n % 2 will most likely be computed only once.
UPDATE
Actually, the simplest and most correct way would be to use a table, which also handles negative n (in C++, n % 2 evaluates to -1 for negative odd n):
const int res[] {-1, 1, -1};
return res[n % 2 + 1];
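A quick check of the negative-n behaviour (the wrapper name is mine, not from the answer):

#include <cassert>

// In C++11, integer division truncates toward zero, so n % 2 is -1 for
// negative odd n; the three-entry table covers all three possible cases.
int minus_one_pow_table(int n)
{
    static const int res[] {-1, 1, -1};
    return res[n % 2 + 1];
}

int main()
{
    assert(minus_one_pow_table(-3) == -1);
    assert(minus_one_pow_table(-2) ==  1);
    assert(minus_one_pow_table( 3) == -1);
    assert(minus_one_pow_table( 4) ==  1);
}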
You can use (n & 1) instead of n % 2, and << 1 instead of * 2, if you want to be super-pedantic, er, I mean optimized. So the fastest way to compute it on an 8086 processor is:
1 - ((n & 1) << 1)
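A sketch of that expression wrapped in a function (the name is mine); note that on two's-complement targets, n & 1 is 1 for negative odd n too, so this also handles n < 0:

#include <cstdio>

inline int minus_one_pow_bits(int n)
{
    return 1 - ((n & 1) << 1);       // 1 for even n, -1 for odd n
}

int main()
{
    std::printf("%d %d %d\n",
                minus_one_pow_bits(4),     // 1
                minus_one_pow_bits(7),     // -1
                minus_one_pow_bits(-3));   // -1 on two's-complement machines
}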
I just want to clarify where this answer is coming from. The original poster alfC did an excellent job of posting a lot of different ways to compute (-1)^n, some being faster than others.
Nowadays, with processors as fast as they are and optimizing compilers as good as they are, we usually value readability over the slight (even negligible) improvement from shaving a few CPU cycles off an operation.
There was a time when one-pass compilers ruled the earth and MUL operations were new and decadent; in those days a power-of-2 operation was an invitation for gratuitous optimization.
First of all, here is the fastest isOdd test I know of (as an inline method):
/**
 * Return true if the value is odd
 * @param value the value to check
 */
inline bool isOdd(int value)
{
    return (value & 1) != 0;
}
Then make use of this test to return -1 if odd and 1 otherwise (which is the actual output of (-1)^N):
/**
 * Return the computation of (-1)^N
 * @param n the N factor
 */
inline int minusOneToN(int n)
{
    return isOdd(n) ? -1 : 1;
}
Last, as suggested by @Guvante, you can spare a multiplication by just flipping the sign of a value (avoiding the minusOneToN function):
/**
 * Example of the general usage. Avoids a useless multiplication.
 * @param value the value to flip if it is odd
 */
inline int flipSignIfOdd(int value)
{
    return isOdd(value) ? -value : value;
}
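A hypothetical usage, where the loop index is also the value whose parity decides the sign (assuming flipSignIfOdd from above is in scope):

#include <cstdio>

int main()
{
    // 0 - 1 + 2 - 3 + ... - 9, with no multiplication by +/-1
    int sum = 0;
    for (int n = 0; n < 10; ++n)
        sum += flipSignIfOdd(n);
    std::printf("%d\n", sum);        // prints -5
}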