Time in milliseconds in C


Yes, this program has likely used less than a millisecond. Try using microsecond resolution with struct timeval.

For example:

#include <sys/time.h>

struct timeval stop, start;
gettimeofday(&start, NULL);
//do stuff
gettimeofday(&stop, NULL);
printf("took %ld us\n", (long)(stop.tv_usec - start.tv_usec)); // sub-second intervals only
printf("took %.3f ms\n", (double)(stop.tv_sec - start.tv_sec) * 1000 + (double)(stop.tv_usec - start.tv_usec) / 1000);

You can then query the difference (in microseconds) as stop.tv_usec - start.tv_usec. Note that this only works for sub-second intervals, because tv_usec wraps back to zero every second; for the general case, combine tv_sec and tv_usec as in the second printf above.

Edit 2016-08-19

A more appropriate approach on systems with clock_gettime support would be:

#include <stdint.h>
#include <time.h>

struct timespec start, end;
clock_gettime(CLOCK_MONOTONIC_RAW, &start);
//do stuff
clock_gettime(CLOCK_MONOTONIC_RAW, &end);

uint64_t delta_us = (uint64_t)(end.tv_sec - start.tv_sec) * 1000000 + (end.tv_nsec - start.tv_nsec) / 1000;
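
For completeness, here is a self-contained sketch of the same approach. It assumes a POSIX system; CLOCK_MONOTONIC_RAW is Linux-specific, so the portable CLOCK_MONOTONIC is used instead, and the nanosleep() call merely stands in for the code being timed (on older glibc, clock_gettime may require linking with -lrt):

#include <inttypes.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;
    struct timespec pause = { .tv_sec = 0, .tv_nsec = 50 * 1000 * 1000 }; /* 50 ms */

    clock_gettime(CLOCK_MONOTONIC, &start);
    nanosleep(&pause, NULL);                  /* stand-in for the code being timed */
    clock_gettime(CLOCK_MONOTONIC, &end);

    uint64_t delta_us = (uint64_t)(end.tv_sec - start.tv_sec) * 1000000
                      + (end.tv_nsec - start.tv_nsec) / 1000;
    printf("took %" PRIu64 " us\n", delta_us); /* roughly 50000 */
    return 0;
}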

A couple of things might affect the results you're seeing:

  1. You're treating clock_t as a floating-point type; on most implementations it is an integer type (see the sketch after this list).
  2. You might be expecting (1^4) to do something other than compute the bitwise XOR of 1 and 4; in C, ^ is XOR, not exponentiation, so the result is 5.
  3. Since the XOR operands are constants, the compiler probably folds the expression at compile time, so it adds almost no work at runtime.
  4. Since the output is buffered (printf just formats the string and writes it to a buffer in memory), the call completes very quickly indeed.
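
As a point of reference, here is a minimal sketch of using clock() as documented: cast the integer clock_t to double and divide by CLOCKS_PER_SEC. The busy loop and its iteration count are my own stand-ins so that the CPU-time counter has something to measure.

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();

    /* Busy work so the CPU-time counter actually advances. */
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        sink ^= i;

    clock_t stop = clock();

    /* clock_t is an integer tick count: convert explicitly before dividing. */
    double ms = (double)(stop - start) * 1000.0 / CLOCKS_PER_SEC;
    printf("CPU time: %.3f ms (and 1^4 == %d)\n", ms, 1 ^ 4); /* XOR, not power */
    return 0;
}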

You're not specifying how fast your machine is, but it's not unreasonable for this to run very quickly on modern hardware, no.

If you have it, try adding a call to sleep() between the start/stop snapshots, as in the sketch below. Note that sleep() is POSIX though, not standard C.
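
For example, a quick sanity check along those lines (assuming a POSIX system; with sleep(1) the program should report roughly a million microseconds):

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    struct timeval start, stop;

    gettimeofday(&start, NULL);
    sleep(1);                                  /* POSIX, not standard C */
    gettimeofday(&stop, NULL);

    long long us = (long long)(stop.tv_sec - start.tv_sec) * 1000000LL
                 + (stop.tv_usec - start.tv_usec);
    printf("took %lld us\n", us);              /* roughly 1000000 */
    return 0;
}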

This code snippet reports the elapsed time in seconds, with the fractional part giving millisecond and microsecond detail:

#include <stdio.h>
#include <sys/time.h>

struct timeval start, stop;
double secs = 0;

gettimeofday(&start, NULL);

// Do stuff  here

gettimeofday(&stop, NULL);
secs = (double)(stop.tv_usec - start.tv_usec) / 1000000 + (double)(stop.tv_sec - start.tv_sec);
printf("time taken %f\n",secs);

You can use gettimeofday() together with the timedifference_msec() function below to calculate the number of milliseconds elapsed between two samples:

#include <sys/time.h>
#include <stdio.h>

float timedifference_msec(struct timeval t0, struct timeval t1)
{
    return (t1.tv_sec - t0.tv_sec) * 1000.0f + (t1.tv_usec - t0.tv_usec) / 1000.0f;
}

int main(void)
{
   struct timeval t0;
   struct timeval t1;
   float elapsed;

   gettimeofday(&t0, 0);
   /* ... YOUR CODE HERE ... */
   gettimeofday(&t1, 0);

   elapsed = timedifference_msec(t0, t1);

   printf("Code executed in %f milliseconds.\n", elapsed);

   return 0;
}

Note that, when using gettimeofday(), you need to take seconds into account even if you only care about microsecond differences: tv_usec wraps back to zero every second, and you have no way of knowing beforehand at which point within a second each sample is taken.

Here is what I use to get the current timestamp in milliseconds:

#include <sys/time.h>

long long timeInMilliseconds(void) {
    struct timeval tv;

    gettimeofday(&tv,NULL);
    return (((long long)tv.tv_sec)*1000)+(tv.tv_usec/1000);
}
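
For instance, a usage sketch, assuming the timeInMilliseconds() function above is in the same file; the sleep(2) call is just a placeholder for the work being measured:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long long t0 = timeInMilliseconds();   /* defined above */
    sleep(2);                              /* placeholder workload */
    long long t1 = timeInMilliseconds();

    printf("elapsed: %lld ms\n", t1 - t0); /* roughly 2000 */
    return 0;
}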

From man clock:

The clock() function returns an approximation of processor time used by the program.

So there is no indication that you should treat it as milliseconds. Some standards do require a precise value of CLOCKS_PER_SEC (POSIX fixes it at one million), so you could rely on that, but I don't think it is advisable. Second, as @unwind stated, clock_t is not float/double; the man pages suggest it is an integer type. Also note that:

this function will return the same value approximately every 72 minutes

And if you are unlucky, you might hit the moment it is just about to wrap around and start counting from zero again, thus getting a negative or huge value (depending on whether you store the result as a signed or unsigned value).

This:

printf("\n\n%6.3f", stop);

Will most probably print garbage, since passing an integer where printf expects a double is undefined behaviour (and I think this is where most of your problem comes from). If you want to be safe, you can always do:

printf("\n\n%6.3f", (double) stop);

Though I would rather print it as a long long int first:

printf("\n\n%lldf", (long long int) stop);

Modern processors are fast enough that a trivial program can finish in less than one clock tick, so the measured time may come out as zero: the start and stop samples are so close together that they round to the same value.
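
A quick way to observe this effect (a minimal sketch; the tick granularity of clock() varies by platform):

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();
    printf("hello\n");              /* trivial work, likely under one tick */
    clock_t stop = clock();

    /* Often prints 0: both samples fall within the same clock tick. */
    printf("ticks elapsed: %lld\n", (long long)(stop - start));
    return 0;
}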
