Question
Given an x86 with a constant TSC, which is useful for measuring real time, how can one convert between the "units" of TSC reference cycles and normal human real-time units like nanoseconds using the TSC calibration factor calculated by Linux at boot-time?
That is, one can certainly calculate the TSC frequency in user-land by taking TSC and clock measurements (e.g., with CLOCK_MONOTONIC) at both ends of some interval (as sketched below), but Linux has already made this calculation at boot-time, since it internally uses the TSC to help out with time-keeping.
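To make the user-land approach concrete, here is a minimal sketch (an illustration under my own assumptions, not the kernel's method): bracket a sleep with rdtsc and CLOCK_MONOTONIC readings, derive a frequency, and use that factor to convert cycles to nanoseconds. The 100 ms interval and the absence of serializing fences are simplifications; a longer interval and lfence/cpuid serialization would improve accuracy.

/* Sketch: estimate the TSC frequency from CLOCK_MONOTONIC, then
 * convert cycles to nanoseconds with the resulting factor. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <x86intrin.h>   /* __rdtsc */

static uint64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000u + (uint64_t)ts.tv_nsec;
}

int main(void) {
    uint64_t ns0 = now_ns(), tsc0 = __rdtsc();
    usleep(100 * 1000);                          /* ~100 ms interval */
    uint64_t ns1 = now_ns(), tsc1 = __rdtsc();

    double hz = (double)(tsc1 - tsc0) * 1e9 / (double)(ns1 - ns0);
    printf("estimated TSC frequency: %.3f MHz\n", hz / 1e6);

    /* Converting a cycle count to nanoseconds with this factor: */
    uint64_t cycles = tsc1 - tsc0;
    printf("%llu cycles ~= %.0f ns\n",
           (unsigned long long)cycles, (double)cycles * 1e9 / hz);
    return 0;
}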
For example, you can see the kernel's result with dmesg | grep tsc:
[ 0.000000] tsc: PIT calibration matches HPET. 2 loops
[ 0.000000] tsc: Detected 3191.922 MHz processor
[ 1.733060] tsc: Refined TSC clocksource calibration: 3192.007 MHz
In a worst-case scenario I guess you could try to grep the result out of dmesg at runtime, but that frankly seems terrible, fragile, and all sorts of bad [0].
The advantages of using the kernel-determined calibration are many:
- You don't have to write a TSC calibration routine yourself, and you can be pretty sure the Linux one is best-of-breed.
- You automatically pick up new techniques in TSC calibration as new kernels come out, using your existing binary (e.g., recently chips started advertising their TSC frequency via cpuid leaf 0x15, so calibration isn't always necessary; see the sketch after this list).
- You don't slow down your startup with a TSC calibration.
- You use the same TSC value on every run of your process (at least until reboot).
- Your TSC frequency is somehow "consistent" with the TSC frequency used by OS time-keeping functions such as gettimeofday and clock_gettime [1].
- The kernel is able to do the TSC calibration very early at boot, in kernel mode, free from the scourges of interrupts and other processes, and is able to access the underlying hardware timers directly as its calibration source.
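As an aside, here is a hedged sketch of the cpuid leaf 0x15 path mentioned above, using the __get_cpuid helper from GCC/Clang's <cpuid.h>. On chips that enumerate the core-crystal frequency this yields the TSC frequency without any calibration; any of the returned fields may be zero, in which case you still need a fallback.

/* Sketch: TSC frequency from CPUID leaf 0x15. EBX/EAX is the
 * TSC-to-crystal ratio and ECX is the crystal frequency in Hz;
 * returns 0 if the leaf or the fields are not enumerated. */
#include <stdint.h>
#include <stdio.h>
#include <cpuid.h>       /* __get_cpuid (GCC/Clang) */

static uint64_t tsc_freq_from_cpuid(void) {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(0x15, &eax, &ebx, &ecx, &edx))
        return 0;                        /* leaf 0x15 not supported */
    if (eax == 0 || ebx == 0 || ecx == 0)
        return 0;                        /* ratio or crystal Hz missing */
    return (uint64_t)ecx * ebx / eax;    /* Hz */
}

int main(void) {
    uint64_t hz = tsc_freq_from_cpuid();
    if (hz)
        printf("TSC frequency: %.3f MHz\n", hz / 1e6);
    else
        printf("cpuid leaf 0x15 unavailable; calibrate instead\n");
    return 0;
}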
It's not all gravy, though; some downsides of using Linux's TSC calibration include:
- It won't work on every Linux installation (e.g., perhaps those that don't use a tsc clocksource - see the check sketched after this list) or on other OSes at all, so you may still be stuck writing a fallback calibration method.
- There is some reason to believe that a "recent" calibration may be more accurate than an old one, especially one taken right after boot: the crystal behavior may change, especially as temperatures change, so you may get a more accurate frequency by doing it manually close to the point where you'll use it.
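One way to guard against the first downside: before trusting the kernel's TSC-based calibration, you can check which clocksource the kernel is actually using via the standard sysfs node, as in this sketch (assumes sysfs is mounted at /sys):

/* Sketch: report whether the kernel's current clocksource is the TSC. */
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[64] = {0};
    FILE *f = fopen("/sys/devices/system/clocksource/clocksource0"
                    "/current_clocksource", "r");
    if (f && fgets(buf, sizeof buf, f))
        printf("current clocksource: %s", buf);   /* e.g. "tsc\n" */
    if (f)
        fclose(f);
    return strncmp(buf, "tsc", 3) != 0;           /* exit 0 iff TSC */
}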
[0] For example: systems may not have dmesg installed, you may not be able to run it as a regular user, the accumulated output may have wrapped around so the lines are no longer present, you may get false positives on your grep, the kernel messages are English prose and subject to change, it may be hard to launch a sub-process, etc., etc.
[1] It is somewhat debatable whether this matters - but if you are mixing rdtsc calls in with code that also uses OS time-keeping, it may increase precision.
Source: https://stackoverflow.com/questions/51919219/determine-tsc-frequency-on-linux