I did some research on the internet but I am still confused. Is Unix time a universal time like GMT/UTC, or does it vary from place to place like a local time?
I know Unix time is counted from 1 Jan 1970 00:00:00 GMT. When I use the getTime() method in Java (more specifically, Date d = new Date(); long currentTime = d.getTime();), I get the Unix time in milliseconds. Now if person A and person B, who are sitting in two different time zones, use the same method, will they get the same result?
Now if person A and person B, who are sitting in two different time zones, use the same method, will they get the same result?
Yes, they will - assuming their clocks are both "correct" of course.
The java.util.Date class is basically a wrapper around "the time since the Unix epoch, in milliseconds". Given that the Unix epoch was an instant in time (not just "midnight on January 1st 1970" in some unspecified time zone), the number of elapsed milliseconds is the same wherever you are. (Ignoring relativity and any discussion of leap seconds...)
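As a minimal sketch (not part of the original answer), the following shows that the epoch-millisecond count is independent of the JVM's default time zone; changing the zone only changes how an instant is rendered as local date-time text, not the count itself:

```java
import java.util.Date;
import java.util.TimeZone;

public class EpochMillisDemo {
    public static void main(String[] args) {
        // Capture the count of milliseconds since the Unix epoch.
        long millis = new Date().getTime();

        // Changing the JVM's default time zone does not change that count.
        TimeZone.setDefault(TimeZone.getTimeZone("Pacific/Auckland"));
        long inAuckland = new Date().getTime();

        TimeZone.setDefault(TimeZone.getTimeZone("America/Montreal"));
        long inMontreal = new Date().getTime();

        // The three values differ only by the few milliseconds of elapsed
        // run time, not by any time-zone offset (which would be hours).
        System.out.println(millis);
        System.out.println(inAuckland);
        System.out.println(inMontreal);
    }
}
```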
(Side-note: at the Unix epoch, it wasn't midnight in Greenwich. It was 1am, because the UK was observing BST at the time. That's British Standard Time, not British Summer Time - the UK was at UTC+1 from Feb 18th 1968 to October 31st 1971. For more similar trivia, see the Noda Time user guide trivia page.)
The answer by Jon Skeet is correct. I'll add a few thoughts.
Unix time means different things to different people. As that Wikipedia article describes, the basic idea is usually a count of seconds since the epoch, with the epoch being the first moment of 1970 in the UTC time zone. As the name suggests, this approach to time tracking was used in Unix-like operating systems.
Locality
Does it vary by locality? No. By definition, it represents the UTC time zone. So a moment in Unix time means the same simultaneous moment in Auckland, Paris, and Montréal. The "UT" in "UTC" means "Universal Time".
Is Unix time universal in the sense of used everywhere? No, certainly not.
Granularity
First, the granularity. As computer clock chips became more precise, conventional computer systems moved to tracking time by millisecond, microsecond, and even nanosecond. Different software assumes different granularity of time tracking. The java.util.Date/.Calendar classes and Joda-Time library both use millisecond resolution, while the newer java.time package built into Java 8 assumes nanosecond resolution. Some databases such as Postgres typically assume microsecond resolution.
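To illustrate the difference in resolution (a small sketch under the assumption of Java 8 or later, not part of the original answer):

```java
import java.time.Instant;
import java.util.Date;

public class GranularityDemo {
    public static void main(String[] args) {
        // java.util.Date resolves only to milliseconds since the epoch.
        Date date = new Date();
        System.out.println(date.getTime());

        // java.time.Instant carries nanosecond resolution, though the actual
        // precision reported depends on the platform clock.
        Instant instant = Instant.now();
        System.out.println(instant.getEpochSecond()); // whole seconds since the epoch
        System.out.println(instant.getNano());        // nanosecond-of-second fraction
    }
}
```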
To quote the Question…
I am getting the UNIX time in milliseconds
Technically a contradiction in terms, as traditional Unix time or POSIX time is tracked by whole seconds rather than milliseconds.
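For example (a small illustration, not from the original answer), the whole-second count of traditional Unix time can be derived from the millisecond count, or obtained directly in Java 8+:

```java
import java.time.Instant;

public class WholeSecondsDemo {
    public static void main(String[] args) {
        // Millisecond count, as returned by java.util.Date.getTime().
        long millis = System.currentTimeMillis();

        // Traditional Unix/POSIX time is a count of whole seconds.
        long unixSecondsFromMillis = millis / 1000L;

        // Java 8+ can report the whole-second count directly.
        long unixSeconds = Instant.now().getEpochSecond();

        System.out.println(unixSecondsFromMillis);
        System.out.println(unixSeconds);
    }
}
```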
Epoch
Secondly, the epoch. The first moment of 1970 is far from the only epoch used by various computer systems. A couple dozen epochs have been used, some with very wide usage. For example, Microsoft Excel and Lotus 1-2-3 spreadsheets, Cocoa, GPS satellites, Galileo satellites, DOS & FAT file systems, and NTP (Network Time Protocol) each use a different epoch, ranging from the years 1899 to 2001.
Avoid Count-From-Epoch
Generally it is best to avoid handling date-time values as a count of milliseconds (or any other granularity) from the epoch. Such values are difficult for humans to read and comprehend, which makes debugging difficult and mistakes non-obvious. Add to that the possible mistakes arising from the assumptions about granularity and/or epoch discussed above.
Instead, use a decent date-time library. In Java that means either the java.time framework built into Java 8 and later, or the Joda-Time library. See the sketch below.
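As a hedged sketch of what that looks like with java.time (the time zone ID below is just an example):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class ReadableDateTimeDemo {
    public static void main(String[] args) {
        // Capture the current moment as an Instant (always in UTC),
        // rather than passing around a raw count-from-epoch number.
        Instant now = Instant.now();

        // Prints ISO 8601 text in UTC -- readable and unambiguous in logs.
        System.out.println(now);

        // Adjust into a time zone only when presenting to a user.
        ZonedDateTime zdt = now.atZone(ZoneId.of("America/Montreal"));
        System.out.println(zdt);
    }
}
```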
Do you track text by collecting groups of 7 or 8 bits? No, you use classes and libraries to do the heavy-lifting of handling character sets, character encoding, and such. Do the same for date-time work.
Source: https://stackoverflow.com/questions/29816286/is-unix-time-universal