Currently I'm getting the execution wall time of my program in seconds by calling:

time_t startTime = time(NULL);
//section of code
time_t endTime = time(NULL);
double elapsed = difftime(endTime, startTime);
gprof, which is part of the GNU toolkit, is an option. Most POSIX systems will have it installed, and it's available under Cygwin for Windows. Tracking the time yourself using gettimeofday() works fine, but it's the performance equivalent of using print statements for debugging. It's good if you just want a quick and dirty solution, but it's not quite as elegant as using proper tools.
To use gprof, you must specify the -pg option when compiling with gcc, as in:
gcc -o prg source.c -pg
Running the compiled program then writes profiling data to a file called gmon.out in the current directory. You can then run gprof on the generated program as follows:
gprof prg > gprof.out
By default, gprof will generate the overall runtime of your program, as well as the amount of time spent in each function, the number of times each function was called, the average time spent in each function call, and similar information.
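For illustration, the flat profile section of gprof's output looks roughly like this (the function names and figures here are made up):

  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 60.0       0.60     0.60     1000     0.60     0.90  compute
 40.0       1.00     0.40     1000     0.40     0.40  helper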
There are a large number of options you can set with gprof. If you're interested, there is more information in the man pages or through Google.
If you can do this outside of the program itself, on Linux, you can use the time command (time ./my_program).
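For example, with bash's built-in time, a run prints something like this (the figures are illustrative):

$ time ./my_program

real    0m1.234s
user    0m1.180s
sys     0m0.040s

real is the wall-clock time; user and sys are CPU time spent in user space and in the kernel, respectively.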
If you're on a POSIX-ish machine, use gettimeofday() instead; that gives you reasonable portability and microsecond resolution.
Slightly more esoteric, but also in POSIX, is the clock_gettime() function, which gives you nanosecond resolution.
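Here is a minimal sketch of timing a section of code with clock_gettime() and the monotonic clock (on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* section of code */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("elapsed: %.9f seconds\n", elapsed);
    return 0;
}

CLOCK_MONOTONIC is preferable to CLOCK_REALTIME for interval timing because it isn't affected by adjustments to the system clock.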
On many systems, you will find a function ftime() that actually returns you the time in seconds and milliseconds. However, it is no longer in the Single Unix Specification (roughly the same as POSIX). You need the header <sys/timeb.h>:
struct timeb mt;

if (ftime(&mt) == 0)
{
    /* mt.time    - seconds since the epoch */
    /* mt.millitm - milliseconds            */
}

(Note the member is millitm, not millitime.)
This dates back to Version 7 (or 7th Edition) Unix at least, so it has been very widely available.
I also have notes in my sub-second timer code on times() and clock(), which use other structures and headers again. I also have notes about Windows using clock() with 1000 clock ticks per second (millisecond timing), and an older interface GetTickCount() which is noted as necessary on Windows 95 but not on NT.
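As a minimal sketch of clock(): note that on POSIX it measures CPU time consumed (with CLOCKS_PER_SEC fixed at 1000000), whereas on Windows it has historically tracked elapsed time at 1000 ticks per second, as noted above:

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();
    /* section of code */
    clock_t end = clock();

    /* CPU time on POSIX; closer to wall time on Windows (MSVC). */
    double seconds = (double)(end - start) / CLOCKS_PER_SEC;
    printf("clock(): %f seconds\n", seconds);
    return 0;
}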
I recently wrote a blog post that explains how to obtain the time in milliseconds cross-platform.
It works like time(NULL), but returns the number of milliseconds since the Unix epoch instead of seconds, on both Windows and Linux.
#ifdef _WIN32
#include <Windows.h>
#else
#include <sys/time.h>
#include <time.h>
#endif
#include <stdint.h>

/* Returns the amount of milliseconds elapsed since the UNIX epoch. Works on both
 * Windows and Linux. */
int64_t GetTimeMs64()
{
#ifdef _WIN32
    /* Windows */
    FILETIME ft;
    LARGE_INTEGER li;
    uint64_t ret;

    /* Get the amount of 100 nanosecond intervals elapsed since January 1, 1601 (UTC)
     * and copy it to a LARGE_INTEGER structure. */
    GetSystemTimeAsFileTime(&ft);
    li.LowPart = ft.dwLowDateTime;
    li.HighPart = ft.dwHighDateTime;

    ret = li.QuadPart;
    ret -= 116444736000000000ULL; /* Convert from file time to UNIX epoch time. */
    ret /= 10000; /* From 100 nanosecond (10^-7) to 1 millisecond (10^-3) intervals */

    return (int64_t)ret;
#else
    /* Linux */
    struct timeval tv;
    uint64_t ret;

    gettimeofday(&tv, NULL);

    ret = tv.tv_usec;
    /* Convert from microseconds (10^-6) to milliseconds (10^-3) */
    ret /= 1000;

    /* Add the seconds (10^0) after converting them to milliseconds (10^-3);
     * the cast avoids overflow when time_t is 32-bit. */
    ret += (uint64_t)tv.tv_sec * 1000;

    return (int64_t)ret;
#endif
}
You can modify it to return microseconds instead of milliseconds if you want.
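Usage mirrors the time(NULL) pattern from the question:

int64_t start = GetTimeMs64();
/* section of code */
int64_t elapsedMs = GetTimeMs64() - start;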
The open-source GLib library has a GTimer system that claims to provide microsecond accuracy. That library is available on Mac OS X, Windows, and Linux. I'm currently using it to do performance timings on Linux, and it seems to work perfectly.
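A minimal sketch with GTimer (compile with the flags from pkg-config --cflags --libs glib-2.0; the timer starts running as soon as it is created):

#include <stdio.h>
#include <glib.h>

int main(void)
{
    GTimer *timer = g_timer_new(); /* starts timing immediately */

    /* section of code */

    g_timer_stop(timer);
    gdouble seconds = g_timer_elapsed(timer, NULL);
    printf("elapsed: %f seconds\n", seconds);

    g_timer_destroy(timer);
    return 0;
}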
On Windows, use QueryPerformanceCounter and the associated QueryPerformanceFrequency. They don't give you a time that is translatable to calendar time, so if you need that, ask for the time using a CRT API and then immediately call QueryPerformanceCounter. You can then do some simple addition/subtraction to calculate the calendar time, with some error due to the time it takes to execute the APIs consecutively. Hey, it's a PC, what did you expect???
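A minimal sketch of interval timing with these two calls (the frequency is fixed at boot, so it only needs to be queried once):

#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;

    QueryPerformanceFrequency(&freq); /* ticks per second */

    QueryPerformanceCounter(&start);
    /* section of code */
    QueryPerformanceCounter(&end);

    double seconds = (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    printf("elapsed: %f seconds\n", seconds);
    return 0;
}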