I'm writing a portable Socket class that supports timeouts for both sending and receiving. To implement these timeouts I'm using select(). But I sometimes need to know how much time has actually elapsed, and I'm not sure of the best way to measure that on Windows.
How about:
unsigned long start = GetTickCount();
// stuff that needs to be timed
unsigned long delta = GetTickCount() - start;   // elapsed milliseconds
GetTickCount() is not very precise, but it will probably work well enough. If you see a lot of 0, 16 or 31 millisecond intervals, try timing over longer intervals or use a more precise function like timeGetTime().
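For example, a minimal sketch of the timeGetTime variant (it lives in winmm.lib, and timeBeginPeriod/timeEndPeriod can temporarily raise the timer resolution to roughly 1 ms):
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

timeBeginPeriod(1);                       // request ~1 ms timer resolution
DWORD start = timeGetTime();
// stuff that needs to be timed
DWORD delta = timeGetTime() - start;      // elapsed milliseconds
timeEndPeriod(1);                         // restore the previous resolution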
What I usually do is this:
unsigned long deltastack = 0;   // accumulated elapsed time, in milliseconds
int samples = 0;
float average = 0.f;

unsigned long start = GetTickCount();
// stuff that needs to be timed
unsigned long delta = GetTickCount() - start;

deltastack += delta;
if (++samples == 10)
{
    // total time divided by the number of samples
    average = (float)deltastack / 10.f;
    deltastack = 0;
    samples = 0;
}
You can check out QueryPerformanceCounter and QueryPerformanceFrequency. These give you a very high resolution timer, down to one tick every ten cycles on some hardware.
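A minimal sketch of that approach (the tick count is converted to milliseconds here only for illustration):
#include <windows.h>

LARGE_INTEGER freq, begin, end;

QueryPerformanceFrequency(&freq);         // ticks per second
QueryPerformanceCounter(&begin);
// stuff that needs to be timed
QueryPerformanceCounter(&end);

double elapsed_ms = (double) (end.QuadPart - begin.QuadPart) * 1000.0
                  / (double) freq.QuadPart;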
I ended up finding this page: gettimeofday() function for Windows (now via the Wayback Machine), which has a handy, dandy implementation of gettimeofday() for Windows. It uses the GetSystemTimeAsFileTime() method to get an accurate clock.
Update: Here's an alternative active link from the 'Unix to Windows Porting Dictionary for HPC' gettimeofday() (now via the Wayback Machine) that points to the implementation the OP referred to. Note also that there's a typo in the linked implementation:
#if defined(_MSC_VER) || defined(_MSC_EXTENSIONS)
#define DELTA_EPOCH_IN_MICROSECS 11644473600000000Ui64 // WRONG
#else
#define DELTA_EPOCH_IN_MICROSECS 11644473600000000ULL // WRONG
#endif
The values shown are missing a trailing 0: they were written as if the unit were microseconds, but the code actually works in 100-nanosecond intervals. This typo was found via this comment on a Google code project page. The correct values to use are shown below:
#if defined(_MSC_VER) || defined(_MSC_EXTENSIONS)
#define DELTA_EPOCH_IN_MICROSECS 116444736000000000Ui64 // CORRECT
#else
#define DELTA_EPOCH_IN_MICROSECS 116444736000000000ULL // CORRECT
#endif
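For reference, a gettimeofday() replacement along those lines, built on GetSystemTimeAsFileTime() and the corrected constant, looks roughly like this (a sketch, not the linked code verbatim; the function name is hypothetical, and struct timeval is assumed to come from winsock2.h):
#include <winsock2.h>   // struct timeval on Windows
#include <windows.h>

int my_gettimeofday(struct timeval *tp)   // hypothetical name to avoid clashes
{
    FILETIME ft;
    ULARGE_INTEGER t;

    GetSystemTimeAsFileTime(&ft);          // UTC, 100-ns intervals since Jan 1 1601
    t.LowPart  = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;

    t.QuadPart -= 116444736000000000ULL;   // shift to the Unix epoch (Jan 1 1970)
    tp->tv_sec  = (long) (t.QuadPart / 10000000ULL);           // 100 ns -> seconds
    tp->tv_usec = (long) ((t.QuadPart % 10000000ULL) / 10ULL); // remainder -> microseconds
    return 0;
}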
PostgreSQL's implementation of gettimeofday for Windows:
/*
* gettimeofday.c
* Win32 gettimeofday() replacement
*
* src/port/gettimeofday.c
*
* Copyright (c) 2003 SRA, Inc.
* Copyright (c) 2003 SKC, Inc.
*
* Permission to use, copy, modify, and distribute this software and
* its documentation for any purpose, without fee, and without a
* written agreement is hereby granted, provided that the above
* copyright notice and this paragraph and the following two
* paragraphs appear in all copies.
*
* IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
* INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
* LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
* DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
*
* THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
* IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
* SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
*/
#include "c.h"
#include <sys/time.h>
/* FILETIME of Jan 1 1970 00:00:00. */
static const unsigned __int64 epoch = ((unsigned __int64) 116444736000000000ULL);
/*
* timezone information is stored outside the kernel so tzp isn't used anymore.
*
* Note: this function is not for Win32 high precision timing purpose. See
* elapsed_time().
*/
int
gettimeofday(struct timeval * tp, struct timezone * tzp)
{
    FILETIME file_time;
    SYSTEMTIME system_time;
    ULARGE_INTEGER ularge;

    GetSystemTime(&system_time);
    SystemTimeToFileTime(&system_time, &file_time);
    ularge.LowPart = file_time.dwLowDateTime;
    ularge.HighPart = file_time.dwHighDateTime;

    tp->tv_sec = (long) ((ularge.QuadPart - epoch) / 10000000L);
    tp->tv_usec = (long) (system_time.wMilliseconds * 1000);

    return 0;
}
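With a gettimeofday() replacement like either of the above in place, the remaining-timeout bookkeeping around select() boils down to subtracting two timeval samples, e.g. (a sketch, assuming the usual POSIX-style signature):
struct timeval before, after;

gettimeofday(&before, NULL);
// select() call (or whatever else needs to be timed)
gettimeofday(&after, NULL);

long elapsed_us = (after.tv_sec - before.tv_sec) * 1000000L
                + (after.tv_usec - before.tv_usec);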
In your case I would use the platform-independent std::clock().
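A sketch of the std::clock() approach (resolution, and whether it reports wall-clock or CPU time, varies between platforms):
#include <ctime>

std::clock_t start = std::clock();
// stuff that needs to be timed
double elapsed_ms = 1000.0 * (double) (std::clock() - start) / CLOCKS_PER_SEC;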