I am writing a function to compute the average number of clock ticks it takes to call a specific void (*)(void) (i.e., void -> void) function a given number of times.
The average of the first n readings is

              SUM
    Average = ---
               n

After the next reading Mi, the average becomes

               SUM + Mi
    Average2 = --------
                n + 1
So given the current average, it is possible to find the next average with the new reading.
               Average * n + Mi
    Average2 = ----------------
                    n + 1
This can then be rearranged into a form that doesn't need the ever-growing sum:

                         n        Mi
    Average2 = Average * ----- + -----
                         n + 1   n + 1
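A quick numeric check of this rearrangement (the function name is mine, not from the original):

```cpp
#include <cassert>
#include <cmath>

// Incremental update: Average2 = Average * n/(n+1) + Mi/(n+1),
// where n is the number of readings already averaged.
double next_average(double average, unsigned long long n, double mi) {
    return average * ((double)n / (double)(n + 1)) + mi / (double)(n + 1);
}
```

Feeding in the readings 10, 20, 30 one at a time yields the running averages 10, 15, 20, matching (10 + 20 + 30) / 3 computed directly.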
In practice, for timing, the values involved will fit within the machine's native datatypes. As pointed out, this form needs a floating-point representation; while it cannot fail by overflow, it can still fail once n/(n+1) is closer to 1 than the precision of the floating-point mantissa can distinguish, at which point new readings effectively stop moving the average.
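To make that failure concrete: in single precision, once n reaches about 2^26, n + 1 already rounds back to n, so the factor n/(n+1) is computed as exactly 1.0f. A minimal sketch (the helper name is mine):

```cpp
#include <cassert>

// Returns the scale factor n/(n+1) as computed in single precision.
// For n >= 2^26, (n + 1.0f) rounds back to n, so the result is exactly 1.0f
// and the incremental average stops responding to new readings.
float single_precision_ratio(float n) {
    return n / (n + 1.0f);
}
```

The same ratio computed in double precision is still strictly below 1, which is why a wider type only postpones the problem rather than eliminating it.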
From incremental average, there is a better rearrangement:

                          Mi - Average
    Average2 = Average + --------------
                             n + 1

It is better because it needs only one division.
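This one-division form is the running-mean update also used in Welford's online algorithm. A minimal sketch (the function name is mine):

```cpp
#include <cassert>
#include <cmath>

// One-division incremental average:
// Average2 = Average + (Mi - Average) / (n + 1),
// where n is the number of readings already averaged.
double update_average(double average, unsigned long long n, double mi) {
    return average + (mi - average) / (double)(n + 1);
}
```

Because the correction term is (Mi - Average)/(n + 1) rather than a product of near-unity factors, the average keeps moving toward each new reading until the correction itself underflows.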
You can reduce the potential for overflow by adding dt / nSamples to the sum, while making sure that you don't lose the dt % nSamples remainders.
#include <windows.h>  // ULONGLONG, LARGE_INTEGER, QueryPerformanceCounter

template <typename sampleunit_t>
static inline ULONGLONG AveragePerformanceClocks (void (*f)(),
                                                  sampleunit_t nSamples)
{
    ULONGLONG delta = 0;  // carried remainder, always < nSamples
    ULONGLONG sum   = 0;  // accumulates dt / nSamples per iteration
    sampleunit_t i;

    for (i = 0; i < nSamples; ++i) {
        LARGE_INTEGER t1;
        LARGE_INTEGER t2;
        ULONGLONG dt;

        QueryPerformanceCounter(&t1);
        f();
        QueryPerformanceCounter(&t2);
        dt = t2.QuadPart - t1.QuadPart;

        // Reduce the potential for overflow: accumulate dt/nSamples
        // and carry the remainders in delta so nothing is lost.
        delta += (dt % nSamples);
        sum   += (dt / nSamples);
        sum   += (delta / nSamples);
        delta  = (delta % nSamples);
    }
    return sum;
}
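The QueryPerformanceCounter calls are Windows-specific, but the overflow-reducing bookkeeping can be checked in isolation. The sketch below (names are mine) feeds synthetic dt readings through the same delta/sum updates and lets you compare the result against the directly computed average:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Same bookkeeping as the timing loop above, with synthetic readings
// instead of QueryPerformanceCounter deltas. Assumes a non-empty input.
uint64_t average_of(const std::vector<uint64_t>& readings) {
    uint64_t n = readings.size();
    if (n == 0) return 0;
    uint64_t sum   = 0;  // accumulates dt / n, so it grows ~n times slower
    uint64_t delta = 0;  // carried remainder, always < n
    for (uint64_t dt : readings) {
        delta += dt % n;
        sum   += dt / n;
        sum   += delta / n;
        delta  = delta % n;
    }
    return sum;  // truncated integer average; leftover remainder is in delta
}
```

For readings {10, 20, 30} this returns 20, exactly (10 + 20 + 30) / 3, without ever forming the full sum.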
To prevent an overflow of a sum value in a calculation, you can normalize the base values. Let's say that your input data is:
20
20
20
20
20
The sum would be 100, the average 20, and the count 5. If now a new value, 30, were added and a 7-bit integer were used to store the sum, you would hit the overflow and have an issue.
The trick is to normalize: take the new value and divide it by the average; let's call it new_val_norm (so 30 / 20 = 1.5). Take the average value, divide it by the average (so 1.000), and multiply by the count; let's call this avg_norm (so 5). Add new_val_norm to the avg_norm value, divide by the count + 1 (we just added one extra value), and multiply by the average to get the new average value. The risk of overflow for the sum is then pushed away, since the sum is simply not used anymore.
If avg * count (avg_norm) is still too large, you can also opt to divide the new value by avg and count, and add 1 to that.
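A sketch of that normalization with the numbers above; the function name is mine, while new_val_norm and avg_norm follow the description:

```cpp
#include <cassert>
#include <cmath>

// Normalized update: work in units of the current average, so the raw
// sum (which is what overflows first) never has to be formed.
// Assumes average != 0.
double normalized_update(double average, double count, double new_value) {
    double new_val_norm = new_value / average;  // e.g. 30 / 20 = 1.5
    double avg_norm     = count;                // (avg / avg) * count = 5.0
    return (avg_norm + new_val_norm) / (count + 1.0) * average;
}
```

With average 20, count 5, and new value 30, this gives (5 + 1.5) / 6 * 20 = 21.67, the same result as (100 + 30) / 6.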