I know I can use IRQ0, which is the system timer, but this is based on a 14.31818 MHz clock, right? Is there anything offering greater precision?
Thanks.
Edit: Does anyone know what the Windows function QueryPerformanceCounter uses?
"Precision" and "accuracy" mean different things. "The Earth's circumference is 40000.000000000 km" is precise, but not accurate. It's a bit more complicated with clocks:
- Resolution: time between ticks, or the period of ticks. (You could probably call it "precision", but I think "resolution" has a more obvious meaning.)
- Skew: relative difference between nominal and actual clock frequency (ish).
- Drift: rate of change of skew (due to aging, temperature, ...).
- Jitter: random variation in tick timing.
- Latency: how long it takes to get a timestamp.
Even though the "system timer" (PIT according to Wikipedia) runs at 1.something MHz, you generally get IRQ0 somewhere between 100 and 1000 Hz. Apparently you can also read from port 0x40 twice to get the current counter value, but I'm not sure what kind of latency that has (and you get the number of counts until the next interrupt, so you need to do some math). It also doesn't work on more modern "tickless" kernels.
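For concreteness, here's roughly what that latch-and-read looks like on x86 Linux. This is a sketch, not production code: it assumes you can get raw port access (root, via ioperm), and remember the PIT channel 0 register is a down-counter toward the next IRQ0, hence the extra math mentioned above.

```c
/* Sketch: latch and read the PIT channel-0 counter on x86 Linux.
 * Needs root (for ioperm), and assumes the kernel is still driving
 * the PIT at all -- on tickless kernels it may not be. */
#include <stdio.h>
#include <sys/io.h>   /* outb, inb, ioperm (x86 Linux, glibc) */

int main(void)
{
    if (ioperm(0x40, 4, 1) != 0) {   /* access to ports 0x40-0x43 */
        perror("ioperm");
        return 1;
    }

    outb(0x00, 0x43);                /* latch channel 0's current count */
    unsigned lo = inb(0x40);         /* low byte first... */
    unsigned hi = inb(0x40);         /* ...then high byte */
    unsigned count = (hi << 8) | lo; /* counts DOWN toward the next IRQ0 */

    printf("PIT channel 0 count: %u\n", count);
    return 0;
}
```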
There are a few other high-frequency timers:
- Local APIC timer, which is based on the bus frequency and a power-of-2 divider. I can't find much documentation on how to read it, though (its registers appear to be memory-mapped rather than behind an I/O port).
- ACPI power management timer (acpi_pm in Linux, I think, and the /UsePMTimer Windows boot flag), which runs at about 3.58 MHz according to this. IIRC, reading it is a bit expensive.
- HPET, which is at least 10 MHz according to the same link (but it can be higher). It's also supposed to have lower latency than the ACPI PM timer.
- TSC (with caveats; a read sketch follows this list). Almost certainly the lowest latency, and probably the highest frequency as well. (But apparently it can go up by more than 1 every "tick", so the counts-per-second isn't necessarily the same as the resolution.)
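To make the TSC option concrete, here's a minimal sketch using the compiler intrinsic. Note that rdtsc is not a serializing instruction, so careful benchmarks usually pair it with a fence; treat the delta as approximate ticks, per the caveat above.

```c
/* Sketch: timestamping with the TSC via the compiler intrinsic.
 * The caveats above apply: the TSC may not be synchronized across
 * CPUs, may change frequency on older parts, and can advance by
 * more than 1 per "tick". */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc (GCC/Clang; MSVC has <intrin.h>) */

int main(void)
{
    uint64_t start = __rdtsc();

    /* ... work being measured ... */

    uint64_t end = __rdtsc();
    printf("elapsed: %llu TSC ticks\n",
           (unsigned long long)(end - start));
    return 0;
}
```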
Darwin (i.e. OS X) appears to assume that the TSC frequency does not change, and adjusts the base value added to it when waking up from a sleep state where the TSC is not running (apparently C4 and greater). There's a different base value per CPU, because the TSC need not be synchronized across CPUs. You have to put in a reasonable amount of effort to get a sensible timestamp.
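On Darwin, the blessed interface that hides all of that bookkeeping is mach_absolute_time(); a minimal sketch:

```c
/* Sketch: mach_absolute_time() returns ticks in a CPU-dependent unit;
 * mach_timebase_info() gives the fraction converting ticks to ns.
 * The kernel handles the per-CPU base and sleep adjustments
 * described above. */
#include <stdio.h>
#include <stdint.h>
#include <mach/mach_time.h>

int main(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);               /* numer/denom: ticks -> ns */

    uint64_t start = mach_absolute_time();
    /* ... work being measured ... */
    uint64_t end = mach_absolute_time();

    uint64_t ns = (end - start) * tb.numer / tb.denom;
    printf("elapsed: %llu ns\n", (unsigned long long)ns);
    return 0;
}
```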
IIRC, Linux just picks a single clock source (TSC if it's sane, then HPET, then ACPI PM, I think).
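In practice on Linux you'd just ask for CLOCK_MONOTONIC and let the kernel use whichever source it picked; a sketch that also peeks at the chosen clocksource via sysfs:

```c
/* Sketch: read the kernel's chosen clocksource from sysfs, then take
 * a CLOCK_MONOTONIC timestamp, which is backed by that source. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Which clocksource did the kernel pick? (tsc, hpet, acpi_pm, ...) */
    FILE *f = fopen(
        "/sys/devices/system/clocksource/clocksource0/current_clocksource",
        "r");
    if (f) {
        char name[32];
        if (fgets(name, sizeof name, f))
            printf("clocksource: %s", name);
        fclose(f);
    }

    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);   /* -lrt on older glibc */
    printf("monotonic: %ld.%09ld s\n", (long)ts.tv_sec, ts.tv_nsec);
    return 0;
}
```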
IIRC, QueryPerformanceCounter() uses whatever Windows thinks is best. It depends somewhat on Windows version too (XP supposedly doesn't support HPET for interrupts, so presumably it doesn't for timestamps either). You can call QueryPerformanceFrequency() to make a guess (I get 1995030000, which probably means it's the TSC).
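A minimal sketch of that pair of calls, converting the tick delta to seconds with the reported frequency:

```c
/* Sketch: QueryPerformanceFrequency() reports the tick rate of
 * whatever source the kernel chose (TSC, HPET, ACPI PM, ...), and
 * QueryPerformanceCounter() reads it. */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   /* ticks per second */
    printf("QPC frequency: %lld Hz\n", freq.QuadPart);

    QueryPerformanceCounter(&start);
    /* ... work being measured ... */
    QueryPerformanceCounter(&end);

    double sec = (double)(end.QuadPart - start.QuadPart) / freq.QuadPart;
    printf("elapsed: %.9f s\n", sec);
    return 0;
}
```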
Intel processors usually have a high-precision timer available via the rdtsc instruction.
It ticks much faster than the 14 MHz clock you mention¹. The caveat is that it can have issues on multi-core and speed-stepping processors.
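One common workaround for the multi-core issue is to pin the measuring thread to a single CPU, so both reads hit the same core's counter. A Linux-flavored sketch (this does nothing about speed stepping on CPUs without an invariant TSC):

```c
/* Sketch: pin the thread to one CPU before using rdtsc, so both
 * reads come from the same core's counter. Linux-specific. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <sched.h>
#include <x86intrin.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                      /* stay on CPU 0 */
    if (sched_setaffinity(0, sizeof set, &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    uint64_t start = __rdtsc();
    /* ... work being measured ... */
    uint64_t end = __rdtsc();

    printf("elapsed: %llu TSC ticks\n",
           (unsigned long long)(end - start));
    return 0;
}
```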
Edit: This question has a lot more detail on this subject.
1. The actual frequency depends on the processor, but it is often the processor's clock frequency. Apparently on Nehalem processors the TSC runs at the front-side bus frequency (133 MHz).
Source: https://stackoverflow.com/questions/3835111/whats-the-most-accurate-way-of-measuring-elapsed-time-in-a-modern-pc