Question
The C standard (ISO/IEC 9899) states:
7.2x.2.2 The difftime function

Synopsis

#include <time.h>
double difftime(time_t time1, time_t time0);

Description

The difftime function computes the difference between two calendar times: time1 - time0.

Returns

The difftime function returns the difference expressed in seconds as a double.
This leaves it ambiguous (intentionally, I guess) whether the result accounts for leap seconds or not. The difference (26 seconds when comparing from 1970 to July 2015) matters in some applications.
Most implementations of the C standard library do not account for leap seconds, and this is testable: the following (intentionally terse) code tends to output leap seconds accounted for from 2000/01/01 to 2015/01/01: 0 (or -473385600 if mktime is inoperative), when there really have been 3 leap seconds over that period.
#include <time.h>
#include <stdio.h>

struct tm t0, t1;  // globals, thus initialized to zero

int main(void) {
    t0.tm_year = 2000 - 1900;     // from 2000
    t1.tm_year = 2015 - 1900;     // to 2015
    t0.tm_mday = t1.tm_mday = 1;  // first day of January
    printf("leap seconds accounted for from 2000/01/01 to 2015/01/01: %.0f\n",
           difftime( mktime(&t1), mktime(&t0) ) - 86400.*(365.*15+4) );
    return 0;
}
Are there actual systems with a C/C++ standard library that accounts for leap seconds, as testable using such a combination of mktime and difftime?
Put another way: many modern operating systems are made aware of legislative changes to legal time by an update mechanism, and standard library functions like localtime do use that information and compute their result accordingly. It would be entirely possible, and C-standard-conformant as far as I can see, for a similar update mechanism to inform the operating system of past and near-future leap seconds, and for either difftime or mktime to use that information. My question asks whether there are such systems and standard libraries around, because that would impact some code.
Following a comment: the context is code that should be portable to a variety of systems (from embedded to mainframes, some quite old) and that decides when (in seconds from the time of the call, as an integer of at most 99999) some action has to be triggered, based (in addition to the system time) on the given "number of" (non-leap) "seconds elapsed since 2000/01/01 at midnight in UTC" and the desired time of the action. An error of ±2 seconds (in addition to the drift of the UTC reference) is tolerable.
Existing code uses a trivial combination of time, mktime for 2000/01/01, and difftime for the difference between these, followed by subtraction of the given value. I wonder if there is serious concern that it could fail (and return something slightly out of the stated tolerance; like 4 too low at the time of writing, and increasing). I'm not asking how to make the code portable (one option would be using gmtime(time(NULL)) and computing the rest with explicit code, as sketched below).
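For what it's worth, a minimal sketch of that explicit-code option might look like the following. This is not the existing code; days_from_civil is a helper name introduced here for the illustration (the usual civil-date-to-day-count formula), and the sketch assumes gmtime yields UTC and, by construction, counts non-leap seconds.

#include <stdio.h>
#include <time.h>

/* Days from 1970-01-01 to the given proleptic Gregorian date.
   Standard "days from civil" computation; valid here for years >= 1970. */
static long days_from_civil(int y, int m, int d) {
    y -= m <= 2;                        /* shift the year so the leap day falls last */
    long era = y / 400;
    long yoe = y - era * 400;           /* year of era, [0, 399] */
    long doy = (153L * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1;  /* day of year, [0, 365] */
    long doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;            /* day of era */
    return era * 146097L + doe - 719468L;   /* 719468 days from 0000-03-01 to 1970-01-01 */
}

int main(void) {
    time_t now = time(NULL);
    struct tm *u = gmtime(&now);        /* broken-down time in UTC */
    long days = days_from_civil(u->tm_year + 1900, u->tm_mon + 1, u->tm_mday)
              - days_from_civil(2000, 1, 1);
    long long secs = (long long)days * 86400
                   + u->tm_hour * 3600L + u->tm_min * 60 + u->tm_sec;
    printf("non-leap seconds since 2000-01-01 00:00:00 UTC: %lld\n", secs);
    return 0;
}

Since this never goes through mktime, it does not depend on the local time zone settings.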
The main question is worded without time to keep out of scope the separate portability issue of whether time accounts for the time zone.
Answer 1:
This is asked as a programming question, but it is really a physical one.
First, the computing view:
Common operating systems know about UTC time and possibly local time. They assume that the reference is UTC time and that all minutes last exactly 60 seconds. They use leap-second-style corrections to compensate for errors between their local time source (a quartz oscillator) and an external reference. From their point of view, there is no difference between a correction of a drifting clock and a true (physical) leap second. For that reason they are not aware of true leap seconds and currently ignore them.
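A minimal illustration of that point, assuming a POSIX-style time_t that counts non-leap seconds since 1970-01-01 00:00:00 UTC (the two timestamps below straddle the 2015-06-30 leap second):

#include <stdio.h>
#include <time.h>

int main(void) {
    /* Just before and just after the leap second inserted at the end of
       2015-06-30 (23:59:59 -> 23:59:60 -> 00:00:00 UTC). */
    time_t before = 1435708799;  /* 2015-06-30 23:59:59 UTC */
    time_t after  = 1435708800;  /* 2015-07-01 00:00:00 UTC */

    /* Two real (SI) seconds elapsed across that boundary, but with a
       non-leap-counting time_t, difftime reduces to a plain subtraction
       and reports only 1. */
    printf("difftime reports: %.0f second(s)\n", difftime(after, before));
    return 0;
}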
Now the physical view (see the UTC and TAI articles on Wikipedia):
In 1955, the caesium atomic clock was invented. This provided a form of timekeeping that was both more stable and more convenient than astronomical observations.
[In 1972, TAI (Temps Atomique International in French) was defined, based only on the caesium atomic clock.] In the 1970s, it became clear that the clocks participating in TAI were ticking at different rates due to gravitational time dilation, and the combined TAI scale therefore corresponded to an average of the altitudes of the various clocks. Starting from Julian Date 2443144.5 (1 January 1977 00:00:00), corrections were applied to the output of all participating clocks, so that TAI would correspond to proper time at mean sea level (the geoid). Because the clocks had been on average well above sea level, this meant that TAI slowed down, by about one part in a trillion.
Earth's rotational speed is very slowly decreasing because of tidal deceleration; this increases the length of the mean solar day. The length of the SI second was calibrated on the basis of the second of ephemeris time and can now be seen to have a relationship with the mean solar day observed between 1750 and 1892, analysed by Simon Newcomb. As a result, the SI second is close to 1/86400 of a mean solar day in the mid-19th century. In earlier centuries, the mean solar day was shorter than 86,400 SI seconds, and in more recent centuries it is longer than 86,400 seconds. Near the end of the 20th century, the length of the mean solar day (also known simply as "length of day" or "LOD") was approximately 86,400.0013 s. For this reason, UT is now "slower" than TAI by the difference (or "excess" LOD) of 1.3 ms/day.
The first leap second occurred on 30 June 1972. Since then, leap seconds have occurred on average about once every 19 months, always on 30 June or 31 December. As of July 2015, there have been 26 leap seconds in total, all positive, putting UTC 36 seconds behind TAI.
TL/DR: So, if you really need it, you will have to get the dates at which the 26 leap seconds were introduced in (physical) UTC and add them manually when relevant. AFAIK, no current operating system or standard library deals with them.
A table of the introduction dates of leap seconds is maintained as plain text at http://www.ietf.org/timezones/data/leap-seconds.list
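If that manual correction is ever needed, a sketch could look like the following. The table is hard-coded here with only the three insertions relevant to the question's 2000-2015 example (a real implementation would refresh it from leap-seconds.list), and leap_seconds_between is a name invented for the illustration.

#include <stdio.h>
#include <time.h>

/* POSIX timestamps of the start of the UTC day following each positive
   leap second -- only the three insertions between 2000 and 2015. */
static const time_t leap_insertions[] = {
    1136073600,  /* after the leap second at the end of 2005-12-31 */
    1230768000,  /* after the leap second at the end of 2008-12-31 */
    1341100800   /* after the leap second at the end of 2012-06-30 */
};

/* Number of positive leap seconds inserted in (t0, t1]; the true elapsed
   SI seconds is then difftime(t1, t0) + leap_seconds_between(t0, t1). */
static int leap_seconds_between(time_t t0, time_t t1) {
    int i, n = 0;
    for (i = 0; i < (int)(sizeof leap_insertions / sizeof leap_insertions[0]); i++)
        if (leap_insertions[i] > t0 && leap_insertions[i] <= t1)
            n++;
    return n;
}

int main(void) {
    time_t t0 = 946684800;   /* 2000-01-01 00:00:00 UTC */
    time_t t1 = 1420070400;  /* 2015-01-01 00:00:00 UTC */
    printf("leap seconds between them: %d\n", leap_seconds_between(t0, t1));  /* prints 3 */
    return 0;
}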
Source: https://stackoverflow.com/questions/31313157/are-there-actual-systems-where-difftime-accounts-for-leap-seconds