Angles in my program are expressed in the range 0 to 2π. I want a way to add two angles and have the result wrap around from 2π to 0 if it is higher than 2π. Or if I subtracted an a
Out of curiosity, I experimented with the three algorithms given in other answers, timing them.
When the values to be normalized are close to the range 0..2π, then the while algorithm is quickest; the algorithm using fmod() is slowest, and the algorithm using floor() is in between.

When the values to be normalized are not close to the range 0..2π, then the while algorithm is slowest, the algorithm using floor() is quickest, and the algorithm using fmod() is in between.
So, I conclude that:

- If the values to be normalized are generally close to the range 0..2π, the while algorithm is the one to use.
- If the values may be far from that range, the floor() algorithm is the one to use.
Timing results (r1 = while, r2 = fmod(), r3 = floor()):

Near Normal Far From Normal
r1 0.000020 r1 0.000456
r2 0.000078 r2 0.000085
r3 0.000058 r3 0.000065
r1 0.000032 r1 0.000406
r2 0.000085 r2 0.000083
r3 0.000057 r3 0.000063
r1 0.000033 r1 0.000406
r2 0.000085 r2 0.000085
r3 0.000058 r3 0.000065
r1 0.000033 r1 0.000407
r2 0.000086 r2 0.000083
r3 0.000058 r3 0.000063
The test code used the value shown for PI. The C standard does not define a value for π, but POSIX does define M_PI and a number of related constants, so I could have written my code using M_PI instead of PI.
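For example, a minimal sketch of that alternative, assuming a POSIX <math.h> (some compilers only expose M_PI when a feature-test macro such as _XOPEN_SOURCE is defined):

/* Sketch: use POSIX M_PI instead of a hand-written constant. */
#define _XOPEN_SOURCE 700   /* may be needed for M_PI in strict ISO C modes */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double twoPi = 2.0 * M_PI;
    printf("2*pi = %.17g\n", twoPi);
    printf("7.0 normalized = %.17g\n", 7.0 - twoPi * floor(7.0 / twoPi));
    return 0;
}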
#include <math.h>
#include <stdio.h>
#include "timer.h"   /* timing support: Clock, clk_init(), clk_start(), clk_stop(), clk_elapsed_us() */

static const double PI = 3.14159265358979323844;

/* r1: repeatedly add or subtract 2*PI until the angle is in range */
static double r1(double angle)
{
    while (angle > 2.0 * PI)
        angle -= 2.0 * PI;
    while (angle < 0)
        angle += 2.0 * PI;
    return angle;
}

/* r2: reduce with fmod(), then fix up a negative remainder */
static double r2(double angle)
{
    angle = fmod(angle, 2.0 * PI);
    if (angle < 0.0)
        angle += 2.0 * PI;
    return angle;
}

/* r3: subtract the whole number of turns found with floor() */
static double r3(double angle)
{
    double twoPi = 2.0 * PI;
    return angle - twoPi * floor(angle / twoPi);
}

/* Time one normalization function over the configured range of angles */
static void tester(const char *tag, double (*test)(double), int noisy)
{
    typedef struct TestSet { double start, end, increment; } TestSet;
    static const TestSet tests[] =
    {
        { -6.0 * PI, +6.0 * PI, 0.01 },        // near normal
        // { -600.0 * PI, +600.0 * PI, 3.00 }, // far from normal
    };
    enum { NUM_TESTS = sizeof(tests) / sizeof(tests[0]) };
    Clock clk;
    clk_init(&clk);
    clk_start(&clk);
    for (int i = 0; i < NUM_TESTS; i++)
    {
        for (double angle = tests[i].start; angle < tests[i].end; angle += tests[i].increment)
        {
            double result = (*test)(angle);
            if (noisy)
                printf("%12.8f : %12.8f\n", angle, result);
        }
    }
    clk_stop(&clk);
    char buffer[32];
    printf("%s %s\n", tag, clk_elapsed_us(&clk, buffer, sizeof(buffer)));
}

int main(void)
{
    tester("r1", r1, 0);
    tester("r2", r2, 0);
    tester("r3", r3, 0);
    tester("r1", r1, 0);
    tester("r2", r2, 0);
    tester("r3", r3, 0);
    tester("r1", r1, 0);
    tester("r2", r2, 0);
    tester("r3", r3, 0);
    tester("r1", r1, 0);
    tester("r2", r2, 0);
    tester("r3", r3, 0);
    return 0;
}
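The Clock type and the clk_*() functions come from a separate timer.h support package that is not shown here. As a rough stand-in (an assumption, not the real timer.h), assuming POSIX clock_gettime() and that the elapsed time is reported as seconds with microsecond precision to match the figures above, something like this lets the program compile and run:

/* Hypothetical drop-in for the "timer.h" used above -- not the original package. */
#include <stdio.h>
#include <time.h>

typedef struct Clock { struct timespec t0, t1; } Clock;

static void clk_init(Clock *clk)  { clk->t0 = clk->t1 = (struct timespec){ 0, 0 }; }
static void clk_start(Clock *clk) { clock_gettime(CLOCK_MONOTONIC, &clk->t0); }
static void clk_stop(Clock *clk)  { clock_gettime(CLOCK_MONOTONIC, &clk->t1); }

/* Format the elapsed time as seconds (e.g. "0.000020") into the caller's buffer. */
static char *clk_elapsed_us(Clock *clk, char *buffer, size_t buflen)
{
    double secs = (double)(clk->t1.tv_sec - clk->t0.tv_sec)
                + (double)(clk->t1.tv_nsec - clk->t0.tv_nsec) / 1e9;
    snprintf(buffer, buflen, "%.6f", secs);
    return buffer;
}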
Testing was on Mac OS X 10.7.4 with the standard /usr/bin/gcc (i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.9.00)). The 'close to normalized' test code is shown; the 'far from normalized' timings were produced by uncommenting the commented-out entry in the tests[] array.
Timing with a home-built GCC 4.7.1 is similar (the same conclusions would be drawn):
Near Normal Far From Normal
r1 0.000029 r1 0.000321
r2 0.000075 r2 0.000094
r3 0.000054 r3 0.000065
r1 0.000028 r1 0.000327
r2 0.000075 r2 0.000096
r3 0.000053 r3 0.000068
r1 0.000025 r1 0.000327
r2 0.000075 r2 0.000101
r3 0.000053 r3 0.000070
r1 0.000028 r1 0.000332
r2 0.000076 r2 0.000099
r3 0.000050 r3 0.000065