std::chrono offers several clocks for measuring time. At the same time, I guess the only way a CPU can evaluate time is by counting cycles.
Counting cycles, yes, but cycles of what?
On a modern x86, the timesource used by the kernel (internally and for clock_gettime and other system calls) is typically a fixed-frequency counter that counts "reference cycles" regardless of turbo, power-saving, or clock-stopped idle. (This is the counter you get from rdtsc, or __rdtsc() in C/C++.)
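For example, here's a minimal sketch of reading that counter directly (assuming x86 and a GCC/Clang-style compiler that provides <x86intrin.h>; MSVC declares __rdtsc() in <intrin.h> instead):

```cpp
#include <x86intrin.h>
#include <cstdio>

int main() {
    unsigned long long start = __rdtsc();

    volatile unsigned long long sink = 0;  // volatile: keep the loop from being optimized away
    for (int i = 0; i < 1000000; ++i)
        sink += i;

    unsigned long long stop = __rdtsc();
    // The delta is in *reference* cycles at the TSC's fixed frequency,
    // not core clock cycles. Also note rdtsc isn't serializing; serious
    // measurements usually put fences (e.g. _mm_lfence()) around it.
    std::printf("elapsed reference cycles: %llu\n", stop - start);
}
```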
Normal std::chrono implementations will use an OS-provided function like clock_gettime on Unix. (On Linux, this can run purely in user-space, using code + scale-factor data in a VDSO page the kernel maps into every process's address space. Low-overhead timesources are nice: avoiding a user->kernel->user round trip helps a lot with Meltdown + Spectre mitigations enabled.)
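As a concrete sketch (Linux assumed, where libstdc++'s steady_clock is typically built on CLOCK_MONOTONIC), this times a single clock_gettime call, which usually never leaves user-space thanks to the VDSO:

```cpp
#include <chrono>
#include <cstdio>
#include <ctime>

int main() {
    auto t0 = std::chrono::steady_clock::now();

    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);  // usually dispatched via the VDSO, no kernel entry

    auto t1 = std::chrono::steady_clock::now();
    long long ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
    std::printf("one clock_gettime call: ~%lld ns\n", ns);
}
```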
When profiling a tight loop that's not memory-bound, you might want to count actual core clock cycles, so the measurement is insensitive to the current core's actual speed (and you don't have to worry about ramping the CPU up to max turbo, etc.), e.g. using perf stat ./a.out or perf record ./a.out. See e.g. Can x86's MOV really be "free"? Why can't I reproduce this at all?
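For concreteness, here's a hypothetical loop (my example, not from the linked Q&A) of the kind you'd feed to perf stat: its cost is a loop-carried dependency chain, so the cycles count perf reports stays meaningful regardless of what frequency the core happens to run at:

```cpp
#include <cstdint>

int main() {
    std::uint64_t x = 1;
    for (std::uint64_t i = 0; i < 1'000'000'000; ++i)
        x = x * 3 + 1;               // each iteration depends on the previous one
    return static_cast<int>(x & 1);  // use the result so the loop isn't dead code
}
```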
Some systems didn't / don't have a wall-clock-equivalent counter built right into the CPU, so either the OS would maintain a time in RAM that it updated on timer interrupts, or time-query functions would read the time from a separate chip. (System call + hardware I/O = higher overhead, which is part of the reason that x86's rdtsc instruction morphed from a profiling thing into a clocksource thing.)
All of these clock frequencies are ultimately derived from a crystal oscillator on the mobo. But the scale factors used to extrapolate time from cycle counts can be adjusted to keep the clock in sync with atomic time, typically via the Network Time Protocol (NTP), as @Tony points out.
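As a rough illustration (Linux-specific, assuming glibc's adjtimex wrapper), you can peek at that adjustment: the kernel exposes the NTP frequency correction it's currently applying to its timekeeping:

```cpp
#include <sys/timex.h>
#include <cstdio>

int main() {
    timex tx{};              // zero-initialized; tx.modes == 0 makes this a read-only query
    int state = adjtimex(&tx);
    // tx.freq is the frequency offset in parts per million, left-shifted 16 bits
    std::printf("clock state %d, NTP freq offset %.3f ppm\n",
                state, tx.freq / 65536.0);
}
```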