CLOCK_MONOTONIC does not seem available, so clock_gettime is out.
I've read in some places that mach_absolute_time() might be the right way to go, but after reading …
Just use Mach Time.
It is public API, it works on macOS, iOS, and tvOS, and it works from within the sandbox.
Mach Time returns an abstract time unit that I usually call "clock ticks". The length of a clock tick is system-specific and depends on the CPU. On current Intel systems a clock tick is in fact exactly one nanosecond, but you cannot rely on that (it may be different on ARM, and it certainly was different on PowerPC CPUs). The system can also tell you the conversion factor to convert clock ticks to nanoseconds and nanoseconds to clock ticks; this factor is static and will never change at runtime. When your system boots, the clock starts at 0
and then monotonically increases with every clock tick thereafter, so you can also use Mach Time to get the uptime of your system (and, of course, uptime is monotonic!).
Here's some code:
#include <stdio.h>
#include <inttypes.h>
#include <mach/mach_time.h>

int main(void) {
    uint64_t clockTicksSinceSystemBoot = mach_absolute_time();
    printf("Clock ticks since system boot: %"PRIu64"\n",
        clockTicksSinceSystemBoot
    );

    static mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);

    // Cast to double is required to make this a floating point division,
    // otherwise it would be an integer division and only the result would
    // be converted to floating point!
    double clockTicksToNanoseconds = (double)timebase.numer / timebase.denom;

    uint64_t systemUptimeNanoseconds = (uint64_t)(
        clockTicksToNanoseconds * clockTicksSinceSystemBoot
    );

    uint64_t systemUptimeSeconds = systemUptimeNanoseconds / (1000 * 1000 * 1000);
    printf("System uptime: %"PRIu64" seconds\n", systemUptimeSeconds);
}
You can also put a thread to sleep until a certain Mach Time has been reached. Here's some code for that:
// Sleep for 750 ns
uint64_t machTimeNow = mach_absolute_time();
uint64_t clockTicksToSleep = (uint64_t)(750 / clockTicksToNanoseconds);
uint64_t machTimeIn750ns = machTimeNow + clockTicksToSleep;
mach_wait_until(machTimeIn750ns);
As Mach Time has no relation to any wall-clock time, you can play around with your system date and time settings as you like; that won't have any effect on Mach Time.
There's one special consideration, though, that may make Mach Time unsuitable for certain use cases: the CPU clock is not running while your system is asleep! So if you make a thread wait for 5 minutes and after 1 minute the system goes to sleep and stays asleep for 30 minutes, the thread will still be waiting another 4 minutes after the system has woken up, as the 30 minutes of sleep time don't count; the CPU clock was resting as well during that time. Yet in other cases this is exactly what you want to happen.
Mach Time is also a very precise way to measure elapsed time. Here's some code showing that:
// Measure time (sleep() is declared in <unistd.h>)
uint64_t machTimeBegin = mach_absolute_time();
sleep(1);
uint64_t machTimeEnd = mach_absolute_time();
uint64_t machTimePassed = machTimeEnd - machTimeBegin;
uint64_t timePassedNS = (uint64_t)(
    machTimePassed * clockTicksToNanoseconds
);
printf("Thread slept for: %"PRIu64" ns\n", timePassedNS);
You will see that the thread doesn't sleep for exactly one second. That's because it takes some time to put a thread to sleep and to wake it back up again, and even when awake, it won't get CPU time immediately if all cores are already busy running a thread at that moment.
Since macOS 10.12 (Sierra) there also exists mach_continuous_time. The only difference between mach_continuous_time and mach_absolute_time is that continuous time also advances while the system is asleep. So if this has been a problem so far and a reason not to use Mach Time, 10.12 and up offer a solution. The usage is exactly the same as described above.
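If it helps, here's a small sketch in the same style as the snippets above (it assumes macOS 10.12 or later and reuses the clockTicksToNanoseconds factor from the first example; the variable names are mine):
// Uptime including time spent asleep (10.12+)
uint64_t continuousTicks = mach_continuous_time();
uint64_t uptimeIncludingSleepNS = (uint64_t)(
    continuousTicks * clockTicksToNanoseconds
);
printf("Uptime including sleep: %"PRIu64" ns\n", uptimeIncludingSleepNS);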
Also starting with macOS 10.9 (Mavericks), there is mach_approximate_time, and since 10.12 there's also mach_continuous_approximate_time. These two are identical to mach_absolute_time and mach_continuous_time, with the only difference being that they are faster yet less accurate. The standard functions require a call into the kernel, as the kernel takes care of Mach Time. Such a call is somewhat expensive, especially on systems that already have a Meltdown fix. The approximate versions don't always have to call into the kernel. They use a clock in user space that is only synchronized with the kernel clock from time to time to prevent it from running too far out of sync, yet a small deviation is always possible, and thus it is only the "approximate" Mach Time.
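As a rough illustration (again reusing clockTicksToNanoseconds, and assuming macOS 10.9 or later for mach_approximate_time), you can compare the approximate clock against the exact one to get a feel for the deviation on your system; the exact numbers will vary:
// Compare exact and approximate Mach Time (10.9+)
uint64_t exactTicks = mach_absolute_time();
uint64_t approxTicks = mach_approximate_time();
// The approximate clock may lag slightly behind the exact one
int64_t deviationTicks = (int64_t)(exactTicks - approxTicks);
printf("Approximate clock deviation: %"PRId64" ns\n",
    (int64_t)(deviationTicks * clockTicksToNanoseconds)
);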
After looking up a few different answers for this, I ended up defining a header which emulates clock_gettime on Mach:
#ifndef mach_time_h
#define mach_time_h

#include <sys/types.h>
#include <sys/_types/_timespec.h>
#include <mach/mach.h>
#include <mach/clock.h>

/* The opengroup spec isn't clear on the mapping from REALTIME to CALENDAR
   being appropriate or not.
   http://pubs.opengroup.org/onlinepubs/009695299/basedefs/time.h.html */

// XXX only supports a single timer
#define TIMER_ABSTIME -1
#define CLOCK_REALTIME CALENDAR_CLOCK
#define CLOCK_MONOTONIC SYSTEM_CLOCK

typedef int clockid_t;

/* the mach kernel uses struct mach_timespec, so struct timespec
   is loaded from <sys/_types/_timespec.h> for compatibility */
// struct timespec { time_t tv_sec; long tv_nsec; };

int clock_gettime(clockid_t clk_id, struct timespec *tp);

#endif
and in mach_gettime.c
#include "mach_gettime.h"
#include <mach/mach_time.h>
#define MT_NANO (+1.0E-9)
#define MT_GIGA UINT64_C(1000000000)
// TODO create a list of timers,
static double mt_timebase = 0.0;
static uint64_t mt_timestart = 0;
// TODO be more careful in a multithreaded environement
int clock_gettime(clockid_t clk_id, struct timespec *tp)
{
kern_return_t retval = KERN_SUCCESS;
if( clk_id == TIMER_ABSTIME)
{
if (!mt_timestart) { // only one timer, initilized on the first call to the TIMER
mach_timebase_info_data_t tb = { 0 };
mach_timebase_info(&tb);
mt_timebase = tb.numer;
mt_timebase /= tb.denom;
mt_timestart = mach_absolute_time();
}
double diff = (mach_absolute_time() - mt_timestart) * mt_timebase;
tp->tv_sec = diff * MT_NANO;
tp->tv_nsec = diff - (tp->tv_sec * MT_GIGA);
}
else // other clk_ids are mapped to the coresponding mach clock_service
{
clock_serv_t cclock;
mach_timespec_t mts;
host_get_clock_service(mach_host_self(), clk_id, &cclock);
retval = clock_get_time(cclock, &mts);
mach_port_deallocate(mach_task_self(), cclock);
tp->tv_sec = mts.tv_sec;
tp->tv_nsec = mts.tv_nsec;
}
return retval;
}
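For a quick sanity check, a usage sketch could look like this (it assumes the header above is saved as mach_gettime.h next to mach_gettime.c, and a system where the SDK doesn't already declare clock_gettime; the file names are my assumption from the include line):
#include <stdio.h>
#include "mach_gettime.h"

int main(void) {
    struct timespec ts;

    // CLOCK_MONOTONIC maps to the Mach SYSTEM_CLOCK in the shim above;
    // the shim returns KERN_SUCCESS (0) on success
    if (clock_gettime(CLOCK_MONOTONIC, &ts) == 0) {
        printf("monotonic: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
    }

    // CLOCK_REALTIME maps to CALENDAR_CLOCK (UTC since 1970)
    if (clock_gettime(CLOCK_REALTIME, &ts) == 0) {
        printf("realtime:  %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
    }
}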
The Mach kernel provides access to system clocks, out of which at least one (SYSTEM_CLOCK) is advertised by the documentation as being monotonically incrementing.
#include <mach/clock.h>
#include <mach/mach.h>
clock_serv_t cclock;
mach_timespec_t mts;
host_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &cclock);
clock_get_time(cclock, &mts);
mach_port_deallocate(mach_task_self(), cclock);
mach_timespec_t has nanosecond precision. I'm not sure about the accuracy, though.
Mac OS X supports three clocks:
- SYSTEM_CLOCK returns the time since boot time;
- CALENDAR_CLOCK returns the UTC time since 1970-01-01;
- REALTIME_CLOCK is deprecated and is the same as SYSTEM_CLOCK in its current implementation.

The documentation for clock_get_time says the clocks are monotonically incrementing unless someone calls clock_set_time. Calls to clock_set_time are discouraged, as they could break the monotonic property of the clocks, and in fact the current implementation returns KERN_FAILURE without doing anything.
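To make that concrete, here is a small sketch that queries SYSTEM_CLOCK and CALENDAR_CLOCK through the clock service API shown above (the variable names are just for illustration):
#include <stdio.h>
#include <mach/clock.h>
#include <mach/mach.h>

int main(void) {
    clock_serv_t sysClock, calClock;
    mach_timespec_t sysTime, calTime;

    // SYSTEM_CLOCK: monotonic time since boot
    host_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &sysClock);
    clock_get_time(sysClock, &sysTime);
    mach_port_deallocate(mach_task_self(), sysClock);

    // CALENDAR_CLOCK: UTC time since 1970-01-01
    host_get_clock_service(mach_host_self(), CALENDAR_CLOCK, &calClock);
    clock_get_time(calClock, &calTime);
    mach_port_deallocate(mach_task_self(), calClock);

    printf("SYSTEM_CLOCK:   %u.%09d s since boot\n", sysTime.tv_sec, sysTime.tv_nsec);
    printf("CALENDAR_CLOCK: %u.%09d s since 1970-01-01\n", calTime.tv_sec, calTime.tv_nsec);
}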