I tend to agree with @Sam Saffron about using one Stopwatch rather than one per iteration. In your example you are performing 1,000,000 iterations by default. I don't know what the cost of creating a single Stopwatch is, but you are creating 1,000,000 of them, and conceivably that in and of itself could affect your test results. I reworked your "final implementation" a little bit to allow the measurement of each iteration without creating 1,000,000 Stopwatches. Granted, since I am saving the result of each iteration, I am allocating an array of 1,000,000 longs, but at first glance it seems like that would have less overall effect than allocating that many Stopwatches. I haven't compared my version to yours to see whether mine would yield different results.
static void Test2(string testName, Func&lt;int&gt; test, int iterations = 1000000)
{
    // requires: using System; using System.Diagnostics; using System.Linq;
    long[] results = new long[iterations];

    Console.WriteLine(testName); // print header

    for (int i = 0; i < 100; i++) // warm up the cache
    {
        test();
    }

    var timer = Stopwatch.StartNew(); // time the whole process
    long start;
    for (int i = 0; i < results.Length; i++)
    {
        start = Stopwatch.GetTimestamp();
        test();
        results[i] = Stopwatch.GetTimestamp() - start;
    }
    timer.Stop();

    double ticksPerMillisecond = Stopwatch.Frequency / 1000.0;
    Console.WriteLine("Time(ms): {0,3}/{1,10}/{2,8} ({3,10})",
        results.Min(t => t / ticksPerMillisecond),
        results.Average(t => t / ticksPerMillisecond),
        results.Max(t => t / ticksPerMillisecond),
        results.Sum(t => t / ticksPerMillisecond));
    Console.WriteLine("Ticks:    {0,3}/{1,10}/{2,8} ({3,10})",
        results.Min(), results.Average(), results.Max(), results.Sum());
    Console.WriteLine();
}
I am using Stopwatch's static GetTimestamp method twice in each iteration. The delta between the two timestamps is the amount of time spent in that iteration. Using Stopwatch.Frequency, which is expressed in ticks per second, we can convert the delta values to milliseconds.
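To make the conversion concrete, here is a minimal standalone sketch of the same technique (MeasureOnce is just an illustrative name, not part of the code above):

static double MeasureOnce(Action action)
{
    long start = Stopwatch.GetTimestamp();
    action();
    long delta = Stopwatch.GetTimestamp() - start;

    // Stopwatch.Frequency is in ticks per second,
    // so Frequency / 1000.0 gives ticks per millisecond
    return delta / (Stopwatch.Frequency / 1000.0);
}

// e.g., MeasureOnce(() => Thread.Sleep(10)) returns roughly 10-15,
// depending on the timer resolution of the machine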
Using GetTimestamp and Frequency to calculate performance is not necessarily as clear as just using a Stopwatch instance directly. But then, using a different stopwatch for each iteration is probably not as clear as using a single stopwatch to measure the whole thing, either.
I don't know that my idea is any better or any worse than yours, but it is slightly different ;-)
I also agree about the warmup loop. Depending on what your test is doing, there could be some fixed startup costs, such as JIT compilation on the first call, that you don't want to affect the overall results. The warmup loop should eliminate those.
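If you want to see that effect for yourself, timing the very first call separately from a later one usually makes the JIT cost visible. A rough sketch (Work and CompareFirstAndSecondCall are just stand-in names, and the actual numbers will vary by machine):

static int Work()
{
    return Enumerable.Range(0, 1000).Sum();
}

static void CompareFirstAndSecondCall()
{
    var sw = Stopwatch.StartNew();
    Work(); // first call: includes JIT-compiling Work (and anything it touches)
    sw.Stop();
    Console.WriteLine("First call:  {0} ticks", sw.ElapsedTicks);

    sw.Restart();
    Work(); // second call: the code is already compiled
    sw.Stop();
    Console.WriteLine("Second call: {0} ticks", sw.ElapsedTicks);
}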
There is probably a point at which keeping each individual timing result is counterproductive due to the storage necessary to hold the whole array of values (or timers). For less memory, but more processing time, you can simply sum the deltas, computing the min and max as you go. Extra work inside the loop has the potential to throw off your results, but since the min/max/sum bookkeeping happens after the second timestamp is taken, it stays outside the measured interval:
static void Test2(string testName, Func&lt;int&gt; test, int iterations = 1000000)
{
    // requires: using System; using System.Diagnostics;
    long min = long.MaxValue;
    long max = long.MinValue;

    Console.WriteLine(testName); // print header

    for (int i = 0; i < 100; i++) // warm up the cache
    {
        test();
    }

    var timer = Stopwatch.StartNew(); // time the whole process
    long start;
    long delta;
    long sum = 0;
    for (int i = 0; i < iterations; i++)
    {
        start = Stopwatch.GetTimestamp();
        test();
        delta = Stopwatch.GetTimestamp() - start;

        // bookkeeping happens after the second timestamp,
        // so it is not part of the measured interval
        if (delta < min) min = delta;
        if (delta > max) max = delta;
        sum += delta;
    }
    timer.Stop();

    double ticksPerMillisecond = Stopwatch.Frequency / 1000.0;
    Console.WriteLine("Time(ms): {0,3}/{1,10}/{2,8} ({3,10})",
        min / ticksPerMillisecond,
        sum / ticksPerMillisecond / iterations,
        max / ticksPerMillisecond,
        sum / ticksPerMillisecond);
    Console.WriteLine("Ticks:    {0,3}/{1,10}/{2,8} ({3,10})",
        min, (double)sum / iterations, max, sum);
    Console.WriteLine();
}
Looks pretty old school without the LINQ operations, but it still gets the job done.
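For completeness, a hypothetical call site for either version might look like the following. The string-concatenation workload is just a stand-in for whatever you actually want to measure; returning a value from the lambda helps discourage the compiler from optimizing the work away.

static void Main()
{
    Test2("string concat", () =>
    {
        string s = string.Empty;
        for (int i = 0; i < 10; i++)
        {
            s += i;
        }
        return s.Length;
    });
}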