In my dataset I have two timestamp columns. The first is microseconds since the application was started - e.g., 1400805323. The second is described as a 64-bit timestamp which I'm
Assuming that these values were generated today, June 6th 2011, they look like the number of 100-nanosecond intervals since January 1st, 1601. That is how Windows NT stores FILETIME. For a condensed overview, read this blog post by Raymond Chen. Those articles also show how to convert the value to other formats.
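As a quick sanity check of that interpretation (a sketch using the standard datetime module; the sample value is one of the 64-bit timestamps discussed below):

```python
from datetime import datetime, timedelta

# Interpret the raw value as 100-nanosecond intervals since Jan 1, 1601 (UTC)
raw = 129518309081725000
dt = datetime(1601, 1, 1) + timedelta(microseconds=raw // 10)
print(dt)  # falls on June 6, 2011 (UTC)
```

If the result lands on today's date, the FILETIME interpretation is almost certainly the right one.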
See edit below for updated answer:
For NTP time, the 64 bits are broken into seconds and a fraction of a second. The top 32 bits are the seconds. The bottom 32 bits are the fraction of a second. You get the fraction by dividing the fractional part by 2^32.
So step one, convert to a double.
If you like Python, that's easy enough (I didn't add any bounds checking):

def to_seconds(h):
    return (h >> 32) + float(h & 0xffffffff) / 2**32
>>> to_seconds(129518309081725000)
30155831.26845886
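To make the split concrete, here is a small check with a hand-constructed value (not one from the dataset): a fraction field of 0x80000000 is exactly half of 2^32, i.e. half a second.

```python
def to_seconds(h):
    return (h >> 32) + float(h & 0xffffffff) / 2**32

# 10 seconds in the top 32 bits, 0x80000000 (= 0.5 s) in the bottom 32 bits
sample = (10 << 32) | 0x80000000
print(to_seconds(sample))  # → 10.5
```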
The time module can convert that float to a readable time format.
import time
time.ctime(to_seconds(ntp_timestamp))
You'll need to worry about where the timestamp originated, though. time.ctime assumes the value is relative to Jan 1, 1970 (the Unix epoch). So if your program bases its NTP-format times on the program's start, you'd need to add the start time to the seconds to normalize the timestamp for ctime.
>>> time.ctime(to_seconds(129518309081725000))
'Tue Dec 15 17:37:11 1970'
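For example (a sketch; `program_start` is a hypothetical value your program would have recorded at launch, as seconds since the Unix epoch):

```python
import time

# Hypothetical: the Unix time at which the program started
program_start = 1307300000  # assumed value for illustration

def relative_to_ctime(rel_seconds):
    # Shift program-relative seconds onto the Unix epoch before formatting
    return time.ctime(program_start + rel_seconds)
```

relative_to_ctime(0) would then print the program's start time rather than a date in 1970.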
EDIT: PyGuy is right, the original timestamps are not NTP timestamps; they are Windows 64-bit timestamps.
Here is the new to_seconds method, converting the count of 100 ns intervals since 1/1/1601 to seconds since 1970:

def to_seconds(h):
    s = float(h) / 1e7      # convert 100 ns units to seconds
    return s - 11644473600  # number of seconds from 1601 to 1970
And the new output:
import time
time.ctime(to_seconds(129518309081725000))
'Mon Jun 6 04:48:28 2011'
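Going the other way (a sketch, not part of the original answer): the inverse conversion from Unix seconds back to the Windows 64-bit format is just the same two steps reversed. Note that round-tripping through a double loses the lowest few digits of the 100 ns count.

```python
def to_seconds(h):
    s = float(h) / 1e7      # convert 100 ns units to seconds
    return s - 11644473600  # number of seconds from 1601 to 1970

def from_seconds(unix_seconds):
    # Unix seconds back to 100 ns intervals since Jan 1, 1601
    return int((unix_seconds + 11644473600) * 1e7)

h = 129518309081725000
print(abs(from_seconds(to_seconds(h)) - h))  # small, limited by double precision
```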