Question
I'm constructing a micro-benchmark to measure performance changes as I experiment with the use of SIMD instruction intrinsics in some primitive image processing operations. However, writing useful micro-benchmarks is difficult, so I'd like to first understand (and if possible eliminate) as many sources of variation and error as possible.
One factor that I have to account for is the overhead of the measurement code itself. I'm measuring with RDTSC, and I'm using the following code to find the measurement overhead:
#include <vector>

// cpuid (with eax = 0) serializes execution before rdtsc reads the TSC.
extern inline unsigned long long __attribute__((always_inline)) rdtsc64() {
    unsigned int hi, lo;
    __asm__ __volatile__(
        "xorl %%eax, %%eax\n\t"
        "cpuid\n\t"
        "rdtsc"
        : "=a"(lo), "=d"(hi)
        : /* no inputs */
        : "rbx", "rcx");
    return ((unsigned long long)hi << 32ull) | (unsigned long long)lo;
}
unsigned int find_rdtsc_overhead() {
    const int trials = 1000000;

    std::vector<unsigned long long> times;
    times.resize(trials, 0);

    for (int i = 0; i < trials; ++i) {
        unsigned long long t_begin = rdtsc64();
        unsigned long long t_end = rdtsc64();
        times[i] = (t_end - t_begin);
    }

    // print frequencies of cycle counts
}
When running this code, I get output like this:
Frequency of occurrence (for 1000000 trials):
234 cycles (counted 28 times)
243 cycles (counted 875703 times)
252 cycles (counted 124194 times)
261 cycles (counted 37 times)
270 cycles (counted 2 times)
693 cycles (counted 1 times)
1611 cycles (counted 1 times)
1665 cycles (counted 1 times)
... (a bunch of larger times each only seen once)
My questions are these:
- What are the possible causes of the bi-modal distribution of cycle counts generated by the code above?
- Why does the fastest time (234 cycles) only occur a handful of times—what highly unusual circumstance could reduce the count?
Further Information
Platform:
- Linux 2.6.32 (Ubuntu 10.04)
- g++ 4.4.3
- Core 2 Duo (E6600); this has constant rate TSC.
SpeedStep has been turned off (processor is set to performance mode and is running at 2.4GHz); if running in 'ondemand' mode, I get two peaks at 243 and 252 cycles, and two (presumably corresponding) peaks at 360 and 369 cycles.
I'm using sched_setaffinity to lock the process to one core. If I run the test on each core in turn (i.e., lock to core 0 and run, then lock to core 1 and run), I get similar results for the two cores, except that the fastest time of 234 cycles tends to occur slightly fewer times on core 1 than on core 0.
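For reference, a minimal sketch of the kind of sched_setaffinity call this refers to (the helper name pin_to_core is mine, not the actual test code):

#include <sched.h>   // sched_setaffinity, CPU_ZERO, CPU_SET (need _GNU_SOURCE; g++ defines it by default)

// Pin the calling process (pid 0 = the caller) to the given core.
// Returns 0 on success, -1 on failure (check errno).
int pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return sched_setaffinity(0, sizeof(cpu_set_t), &set);
}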
Compile command is:
g++ -Wall -mssse3 -mtune=core2 -O3 -o test.bin test.cpp
The code that GCC generates for the core loop is:
.L105:
#APP
# 27 "test.cpp" 1
xorl %eax, %eax
cpuid
rdtsc
# 0 "" 2
#NO_APP
movl %edx, %ebp
movl %eax, %edi
#APP
# 27 "test.cpp" 1
xorl %eax, %eax
cpuid
rdtsc
# 0 "" 2
#NO_APP
salq $32, %rdx
salq $32, %rbp
mov %eax, %eax
mov %edi, %edi
orq %rax, %rdx
orq %rdi, %rbp
subq %rbp, %rdx
movq %rdx, (%r8,%rsi)
addq $8, %rsi
cmpq $8000000, %rsi
jne .L105
Answer 1:
RDTSC can return inconsistent results for a number of reasons:
- On some CPUs (especially certain older Opterons), the TSC isn't synchronized between cores. It sounds like you're already handling this by using sched_setaffinity -- good!
- If the OS timer interrupt fires while your code is running, there'll be a delay introduced while it runs. There's no practical way to avoid this; just throw out unusually high values.
- Pipelining artifacts in the CPU can sometimes throw you off by a few cycles in either direction in tight loops. It's perfectly possible to have some loops that run in a non-integer number of clock cycles.
- Cache! Depending on the vagaries of the CPU cache, memory operations (like the write to times[]) can vary in speed. In this case, you're fortunate that the std::vector implementation being used is just a flat array; even so, that write can throw things off. This is probably the most significant factor for this code.
I'm not enough of a guru on the Core2 microarchitecture to say exactly why you're getting this bimodal distribution, or how your code ran faster those 28 times, but it probably has something to do with one of the reasons above.
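One simple way to apply the "throw out unusually high values" advice is to summarise repeated trials by their minimum or their most frequent value, since rare inflated readings (timer interrupts, cache misses) barely influence either statistic. A sketch using the times vector from the question (the helper names are mine, not from this answer):

#include <vector>
#include <map>
#include <algorithm>
#include <cstddef>

// Smallest observed back-to-back reading (assumes times is non-empty).
unsigned long long min_cycles(const std::vector<unsigned long long>& times) {
    return *std::min_element(times.begin(), times.end());
}

// Most frequent reading (the mode of the distribution).
unsigned long long mode_cycles(const std::vector<unsigned long long>& times) {
    std::map<unsigned long long, std::size_t> freq;
    for (std::size_t i = 0; i < times.size(); ++i)
        ++freq[times[i]];
    std::map<unsigned long long, std::size_t>::const_iterator best = freq.begin();
    for (std::map<unsigned long long, std::size_t>::const_iterator it = freq.begin();
         it != freq.end(); ++it)
        if (it->second > best->second)
            best = it;
    return best->first;
}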
Answer 2:
The Intel Programmer's Manual recommends you use lfence; rdtsc or rdtscp if you want to ensure that instructions prior to the rdtsc have actually executed. This is because rdtsc isn't a serializing instruction by itself.
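For illustration, those fenced variants can be written with compiler intrinsics, assuming a GCC or Clang recent enough to ship x86intrin.h (the helper names are mine):

#include <stdint.h>
#include <x86intrin.h>   // __rdtsc, __rdtscp, _mm_lfence

// lfence waits for preceding instructions to complete before rdtsc
// reads the time stamp counter.
static inline uint64_t rdtsc_lfence(void) {
    _mm_lfence();
    return __rdtsc();
}

// rdtscp also waits for prior instructions, and additionally returns the
// contents of IA32_TSC_AUX (typically the core ID) through *aux.
static inline uint64_t rdtsc_p(unsigned int *aux) {
    return __rdtscp(aux);
}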
Answer 3:
You should make sure that frequency throttling and power-saving ("green") functionality is disabled at the OS level, then restart the machine. Otherwise you may have a situation where the cores have unsynchronized time stamp counter values.
The 243 reading is by far the most common, which is one reason for using it. On the other hand, suppose you get an elapsed time smaller than the overhead you chose: you subtract the overhead and get an underflow, and since the arithmetic is unsigned you end up with an enormous result. That fact speaks for using the lowest observed reading as the overhead instead. It is extremely difficult to accurately measure sequences that are only a few cycles long. On a typical x86 at a few GHz, I'd recommend against timing sequences shorter than 10 ns, and even at that length they would typically be far from rock-solid.
The rest of my answer is what I do, how I handle the results, and my reasoning on the subject matter.
As to overhead, the easiest way is to use code such as this:
unsigned __int64 rdtsc_inline (void);
unsigned __int64 rdtsc_function (void);
The first form emits the rdtsc instruction into the generated code (as in your code). The second will cause the function to be called, the rdtsc to be executed, and a return instruction to follow; it may also generate a stack frame. Obviously the second form is much slower than the first.
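A sketch of what those two declarations might look like with GCC, with uint64_t standing in for the MSVC-style __int64 and __rdtsc used for brevity (the bodies are my own illustration, not the answer's code):

#include <stdint.h>
#include <x86intrin.h>   // __rdtsc

// Inlined form: the rdtsc instruction ends up directly in the caller's code.
static inline uint64_t rdtsc_inline(void) {
    return __rdtsc();
}

// Out-of-line form: the measured overhead also includes the call/return
// (and possibly a stack frame), so it is noticeably slower.
__attribute__((noinline)) uint64_t rdtsc_function(void) {
    return __rdtsc();
}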
The (C) code for overhead calculation can then be written:
unsigned __int64 start_cycle, end_cycle; /* place these at the module level */
unsigned __int64 overhead;

/* place this code inside a function */
start_cycle = rdtsc_inline();
end_cycle = rdtsc_inline();
overhead = end_cycle - start_cycle;
If you are using the inline variant you will get a low(er) overhead. You also run the risk of calculating an overhead which is greater than it "should" be (especially for the function form), which in turn means that if you measure very short/fast sequences, the previously calculated overhead may be greater than the measurement itself. When you then attempt to adjust for the overhead you will get an underflow, which leads to messy conditions. The best way to handle this is to
- time the overhead several times and always use the smallest value achieved (a sketch of this follows the list),
- not measure really short code sequences, as you may run into pipelining effects which will require messy synchronising instructions before the rdtsc instruction, and
- if you must measure very short sequences regard the results as indications rather than as facts
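A small sketch of that calibration strategy, assuming the rdtsc_inline() above: repeat the overhead measurement, keep the smallest value, and clamp the subtraction so it can never underflow (helper names are mine):

#include <stdint.h>

// Calibrate: the smallest back-to-back reading is the best overhead estimate.
uint64_t calibrate_overhead(int trials) {
    uint64_t best = ~(uint64_t)0;
    for (int i = 0; i < trials; ++i) {
        uint64_t start = rdtsc_inline();
        uint64_t end = rdtsc_inline();
        uint64_t d = end - start;
        if (d < best) best = d;
    }
    return best;
}

// Subtract the overhead without ever underflowing.
uint64_t adjusted_cycles(uint64_t raw, uint64_t overhead) {
    return raw > overhead ? raw - overhead : 0;
}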
I've previously discussed what I do with the results in this thread.
Another thing that I do is to integrate the measurement code into the application. The overhead is insignificant. After a result has been computed I send it to a special structure where I count the number of measurements, sum x and x^2 values and determine min and max measurements. Later on I can use the data to calculate average and standard deviation. The structure itself is indexed and I can measure different performance aspects such individual application functions ("functional performance"), time spent in cpu, disk read/writing, network read/writing ("non-functional performance") etc.
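A minimal sketch of such an accumulator (structure and member names are illustrative, not the author's actual code):

#include <stdint.h>
#include <math.h>

// Accumulates count, sum of x, sum of x^2, min and max for one measurement
// category; mean and standard deviation are derived on demand.
struct PerfStat {
    uint64_t n;
    double sum, sum_sq;
    uint64_t min_v, max_v;

    PerfStat() : n(0), sum(0.0), sum_sq(0.0), min_v(~(uint64_t)0), max_v(0) {}

    void add(uint64_t x) {
        double xd = (double)x;
        ++n;
        sum += xd;
        sum_sq += xd * xd;
        if (x < min_v) min_v = x;
        if (x > max_v) max_v = x;
    }

    double mean() const { return n ? sum / (double)n : 0.0; }

    // Sample standard deviation from the running sums.
    double stddev() const {
        if (n < 2) return 0.0;
        double m = mean();
        return sqrt((sum_sq - (double)n * m * m) / (double)(n - 1));
    }
};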
If an application is instrumented in this manner and monitored from the very beginning I expect that the risk of it having performance problems during its lifetime will be greatly reduced.
Source: https://stackoverflow.com/questions/6432669/variance-in-rdtsc-overhead