I am calculating the time elapsed in milliseconds between each successive call to a handler function using the code below. When I use `usleep(1000)`, i.e. a 1 ms time difference between each call…
`usleep` is specified to sleep at least the amount you give it, but it can sleep much longer. There's almost no upper bound on how long it will sleep, because the operating system doesn't have to run your process if it has more important processes to run.
In practice, the resolution of how long `usleep` will sleep is decided by the clocks the operating system uses. Until a few years ago, most Unix-like systems used a static 100 Hz timer (or 1024 Hz in some rarer cases) to drive timers like this, so your `usleep` would always get rounded up to the nearest 10 ms.
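You can observe this rounding directly by timing the sleeps yourself. A minimal sketch, not the question's original code, that measures each sleep with `clock_gettime(CLOCK_MONOTONIC)` so wall-clock adjustments don't distort the result:

```c
#define _DEFAULT_SOURCE   /* for usleep() under strict -std modes on glibc */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 5; i++) {
        struct timespec before, after;

        clock_gettime(CLOCK_MONOTONIC, &before);
        usleep(1000);                      /* ask for 1 ms */
        clock_gettime(CLOCK_MONOTONIC, &after);

        /* elapsed time in milliseconds */
        double elapsed_ms = (after.tv_sec - before.tv_sec) * 1e3
                          + (after.tv_nsec - before.tv_nsec) / 1e6;

        printf("requested 1 ms, slept %.3f ms\n", elapsed_ms);
    }
    return 0;
}
```

On a system driven by a 100 Hz tick, this would typically print values near 10 ms rather than 1 ms.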
There has been some work done recently on some systems to remove the static clock tick, although this has been driven less by the need for higher-resolution sleeps than by the fact that constantly waking up the CPU for a static clock tick is bad for power consumption. A side effect can be improved timer resolution, but that in turn exposes bugs in applications that used very short sleeps and appeared to behave correctly: with higher-resolution timeouts in `usleep`/`nanosleep`/`poll`/`select`, those short sleeps suddenly lead to applications spinning on the CPU, rescheduling their sleeps all the time.
I'm not sure what the state of this is today, but judging from your 10 ms results, your system either still uses a 100 Hz clock for its internal timers or deliberately slows timeouts down to a 10 ms resolution to prevent such applications from breaking.
Remember that the `tv_usec` field of the `timeval` structure never reaches (or exceeds) one million; instead, `tv_sec` is increased. You have to use the `tv_sec` field as well in your calculation.
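Concretely, a millisecond difference between two `timeval` values should combine both fields. A sketch, with an illustrative helper name not taken from the question's code:

```c
#include <sys/time.h>

/* Milliseconds from 'start' to 'end'. Combining tv_sec and tv_usec in one
 * expression also covers the case where tv_usec wrapped past one million:
 * a negative microsecond difference is compensated by the extra second. */
long elapsed_ms(const struct timeval *start, const struct timeval *end)
{
    return (end->tv_sec - start->tv_sec) * 1000L
         + (end->tv_usec - start->tv_usec) / 1000L;
}
```

For example, going from 0.9995 s (`tv_usec` = 999500) to 1.0005 s (`tv_sec` = 1, `tv_usec` = 500) yields 1000 − 999 = 1 ms, as expected, whereas a calculation using `tv_usec` alone would report a large negative value.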