Is UNIX time universal

Jon Skeet

Now, if person A and person B, sitting in two different time zones, use the same function, will they get the same result?

Yes, they will - assuming their clocks are both "correct" of course.

The java.util.Date class is basically a wrapper around "the time since the Unix epoch, in milliseconds". Given that the Unix epoch was an instant in time (not just "midnight on January 1st 1970"), the number of elapsed milliseconds is the same wherever you are. (Ignoring relativity and any discussion of leap seconds...)
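As a quick illustration (a minimal sketch, not from the original answer, using an arbitrary millisecond count), the value wrapped by java.util.Date is unchanged by the default time zone; only how the moment gets printed differs:

    import java.util.Date;
    import java.util.TimeZone;

    public class EpochMillisDemo {
        public static void main(String[] args) {
            long millis = 1_575_561_600_000L; // an arbitrary count of milliseconds since the Unix epoch

            // The Date simply wraps that count; switching the default zone
            // changes how the moment is rendered as text, not the value it stores.
            TimeZone.setDefault(TimeZone.getTimeZone("Asia/Kolkata"));
            Date a = new Date(millis);

            TimeZone.setDefault(TimeZone.getTimeZone("America/New_York"));
            Date b = new Date(millis);

            System.out.println(a.getTime() == b.getTime()); // true: same count, regardless of zone
        }
    }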

(Side-note: at the Unix epoch, it wasn't midnight in Greenwich. It was 1am, because the UK was observing BST at the time. That's British Standard Time, not British Summer Time - the UK was at UTC+1 from Feb 18th 1968 to October 31st 1971. For more similar trivia, see the Noda Time user guide trivia page.)

Basil Bourque

The answer by Jon Skeet is correct. I'll add a few thoughts.

Unix time means different things to different people. As that Wikipedia article describes, the basic idea is usually a count of seconds since the epoch, with the epoch being the first moment of 1970 in the UTC time zone. As the name suggests, this approach to time tracking was used in Unix-like operating systems.

Locality

Does it vary by locality? No. By definition, it represents the UTC time zone. So a moment in Unix time means the same simultaneous moment in Auckland, Paris, and Montréal. The "UT" in UTC means "Universal Time".
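To make that concrete, here is a minimal sketch (using the java.time classes discussed further down; the epoch-second value is arbitrary) showing one moment on the timeline rendered for those three zones:

    import java.time.Instant;
    import java.time.ZoneId;

    public class SameMomentDemo {
        public static void main(String[] args) {
            // One moment, defined as a count of whole seconds since the Unix epoch (UTC).
            Instant instant = Instant.ofEpochSecond(1_000_000_000L);

            // Three wall-clock renderings of that same simultaneous moment.
            System.out.println(instant.atZone(ZoneId.of("Pacific/Auckland")));
            System.out.println(instant.atZone(ZoneId.of("Europe/Paris")));
            System.out.println(instant.atZone(ZoneId.of("America/Montreal")));

            // Converting any of them back yields the identical Instant.
            System.out.println(instant.atZone(ZoneId.of("Europe/Paris")).toInstant().equals(instant)); // true
        }
    }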

Is Unix time universal in the sense of used everywhere? No, certainly not.

Granularity

First, the granularity. As computer clock chips became more precise, conventional computer systems moved to tracking time by millisecond, microsecond, and even nanosecond. Different software assumes different granularity of time tracking. The java.util.Date/.Calendar classes and Joda-Time library both use millisecond resolution, while the newer java.time package built into Java 8 assumes nanosecond resolution. Some databases such as Postgres typically assume microsecond resolution.
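A minimal sketch of that difference in resolution (the nanosecond value is arbitrary, chosen only to make the truncation visible):

    import java.time.Instant;
    import java.util.Date;

    public class GranularityDemo {
        public static void main(String[] args) {
            // java.time.Instant resolves to nanoseconds...
            Instant instant = Instant.ofEpochSecond(0L, 123_456_789L);

            // ...while java.util.Date resolves only to milliseconds,
            // so converting truncates the finer fraction of a second.
            Date date = Date.from(instant);

            System.out.println(instant);        // 1970-01-01T00:00:00.123456789Z
            System.out.println(date.getTime()); // 123
        }
    }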

To quote the Question…

I am getting the UNIX time in milliseconds

Technically a contradiction in terms, as traditional Unix time or POSIX time is tracked by whole seconds rather than milliseconds.
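In java.time the two conventions are kept distinct by separate factory methods; a minimal sketch with an arbitrary illustrative value:

    import java.time.Instant;

    public class SecondsVersusMillisDemo {
        public static void main(String[] args) {
            // Classic POSIX-style Unix time: whole seconds since the epoch.
            Instant fromSeconds = Instant.ofEpochSecond(1_575_561_600L);

            // The same moment as a count of milliseconds, which is what
            // "Unix time in milliseconds" usually means in Java code.
            Instant fromMillis = Instant.ofEpochMilli(1_575_561_600_000L);

            System.out.println(fromSeconds.equals(fromMillis)); // true
            System.out.println(fromSeconds.getEpochSecond());   // 1575561600
            System.out.println(fromMillis.toEpochMilli());      // 1575561600000
        }
    }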

Epoch

Secondly, the epoch. The first moment of 1970 is far from the only epoch used by various computer systems. A couple dozen epochs have been used, some with very wide usage. For example, Microsoft Excel and Lotus 1-2-3 spreadsheets, Cocoa, GPS satellites, Galileo satellites, DOS & FAT file systems, and NTP (Network Time Protocol) each use a different epoch, ranging from the years 1899 to 2001.
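For example, a count of seconds relative to Cocoa's reference date (2001-01-01T00:00:00Z, to my understanding) has to be shifted before it lands on the Unix-based timeline. A hedged sketch, with a made-up count:

    import java.time.Duration;
    import java.time.Instant;

    public class OtherEpochDemo {
        public static void main(String[] args) {
            // Cocoa counts from 2001-01-01T00:00:00Z rather than the Unix epoch.
            Instant cocoaEpoch = Instant.parse("2001-01-01T00:00:00Z");

            // A made-up count of seconds since that Cocoa reference date.
            long cocoaSeconds = 600_000_000L;

            // Shift by the epoch itself to obtain a moment on the shared timeline.
            Instant instant = cocoaEpoch.plusSeconds(cocoaSeconds);
            System.out.println(instant);

            // Offset between the two epochs, in seconds.
            System.out.println(Duration.between(Instant.EPOCH, cocoaEpoch).getSeconds()); // 978307200
        }
    }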

Avoid Count-From-Epoch

It is generally best to avoid handling date-time values as a count of milliseconds (or any other granularity) from an epoch. Such values are difficult for humans to read and comprehend, which makes debugging difficult and mistakes non-obvious. On top of that come the possible mistakes arising from wrong assumptions about granularity and/or epoch, discussed above.

Instead use a decent date-time library. In Java that means either the java.time framework built into Java 8 and later, or the Joda-Time library mentioned above.

Do you track text by collecting groups of 7 or 8 bits? No, you use classes and libraries to do the heavy lifting of handling character sets, character encoding, and such. Do the same for date-time work.
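A minimal sketch of that approach with java.time: keep the moment as an Instant, apply a zone only for presentation, and touch raw millisecond counts only at the edges where a legacy API demands them.

    import java.time.Instant;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    public class ReadableDateTimeDemo {
        public static void main(String[] args) {
            // Capture the current moment as an Instant, not as a bare long.
            Instant now = Instant.now();
            System.out.println(now); // ISO 8601 in UTC: readable and unambiguous

            // Apply a time zone only when presenting to humans.
            ZonedDateTime inMontreal = now.atZone(ZoneId.of("America/Montreal"));
            System.out.println(inMontreal);

            // Interoperate with count-from-epoch APIs only at the boundary.
            long legacyMillis = now.toEpochMilli();
            System.out.println(Instant.ofEpochMilli(legacyMillis)); // millisecond precision only
        }
    }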
