clock

How Do You Programmatically Set the Hardware Clock on Linux?

て烟熏妆下的殇ゞ submitted on 2019-11-27 01:40:40
Question: Linux provides the stime(2) call to set the system time. However, while this updates the system time, it does not set the BIOS hardware clock to match. Linux systems typically sync the hardware clock with the system time at shutdown and at periodic intervals. However, if the machine is power-cycled before one of these automatic syncs, the time will be incorrect when the machine restarts. How do you ensure that the hardware clock gets updated when you set the system time?
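
For illustration only (this is not code from the question above): on Linux the system time can be pushed into the battery-backed clock either by running hwclock --systohc after setting the time, or programmatically through the RTC ioctl interface. A minimal sketch, assuming the device node /dev/rtc0 exists, the process has permission to open it, and the RTC is kept in UTC:

    // Illustrative sketch: write the current system time into the hardware clock
    // via the Linux RTC driver (roughly what `hwclock --systohc` does).
    #include <fcntl.h>      // open
    #include <unistd.h>     // close
    #include <sys/ioctl.h>  // ioctl
    #include <linux/rtc.h>  // RTC_SET_TIME, struct rtc_time
    #include <ctime>        // time, gmtime_r
    #include <cstdio>       // perror
    #include <cstring>      // memset

    int main() {
        int fd = open("/dev/rtc0", O_WRONLY);   // assumes /dev/rtc0 and root privileges
        if (fd < 0) { perror("open /dev/rtc0"); return 1; }

        // Current system time, broken down as UTC.
        time_t now = time(nullptr);
        struct tm utc;
        gmtime_r(&now, &utc);

        // struct rtc_time uses the same field names as struct tm.
        struct rtc_time rt;
        memset(&rt, 0, sizeof rt);
        rt.tm_sec  = utc.tm_sec;
        rt.tm_min  = utc.tm_min;
        rt.tm_hour = utc.tm_hour;
        rt.tm_mday = utc.tm_mday;
        rt.tm_mon  = utc.tm_mon;
        rt.tm_year = utc.tm_year;

        // Push the value into the hardware clock.
        if (ioctl(fd, RTC_SET_TIME, &rt) < 0) { perror("RTC_SET_TIME"); close(fd); return 1; }
        close(fd);
        return 0;
    }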

Will docker container auto sync time with the host machine?

时间秒杀一切 submitted on 2019-11-26 23:35:31
Given that I have already changed the timezone of the docker container correctly, do I need to install an NTP server inside the docker container to periodically sync the time, or will the container sync the time from its host machine? If you are on OSX running boot2docker, see this issue: https://github.com/boot2docker/boot2docker/issues/290 Time sync becomes an issue because the boot2docker host's clock drifts while your OS is asleep. Time sync with your docker container cannot be fixed by running your container with -v /etc/localtime:/etc/localtime:ro. Instead, for now, you have to periodically run

How to get the precision of high_resolution_clock?

為{幸葍}努か submitted on 2019-11-26 23:01:06
Question: C++11 defines high_resolution_clock and it has the member types period and rep. But I cannot figure out how to get the precision of that clock. Or, if I cannot get the precision directly, can I at least get the minimum representable time duration between ticks as a count in nanoseconds, probably using period?

    #include <iostream>
    #include <chrono>

    void printPrec() {
        std::chrono::high_resolution_clock::rep x = 1;
        // this is not the correct way to initialize 'period':
        //high_resolution
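
As an aside (a sketch under the assumption that by "precision" the tick period is meant, not code from the thread): period is a std::ratio giving the tick length in seconds and can be read and converted to nanoseconds directly:

    #include <iostream>
    #include <chrono>

    int main() {
        using clock = std::chrono::high_resolution_clock;

        // period is a std::ratio<num, den>: one tick lasts num/den seconds.
        std::cout << "one tick = " << clock::period::num << "/"
                  << clock::period::den << " s\n";

        // The same value expressed as a nanosecond count.
        double ns_per_tick = 1e9 * clock::period::num / clock::period::den;
        std::cout << "one tick = " << ns_per_tick << " ns\n";
        return 0;
    }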

Why do I see 400x outlier timings when calling clock_gettime repeatedly?

时间秒杀一切 submitted on 2019-11-26 22:25:58
Question: I'm trying to measure the execution time of some commands in C++ using the physical clock, but I have run into a problem: reading the measurement off the physical clock can itself take a long time. Here is the code:

    #include <string>
    #include <cstdlib>
    #include <iostream>
    #include <math.h>
    #include <time.h>

    int main() {
        int64_t mtime, mtime2, m_TSsum, m_TSssum, m_TSnum, m_TSmax;
        struct timespec t0;
        struct timespec t1;
        int i,j;
        for(j=0;j<10;j++){
            m_TSnum=0;m
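
For reference, a simplified benchmark of the read-off cost itself (a sketch, not code from the question, assuming a POSIX system with CLOCK_MONOTONIC): timing back-to-back clock_gettime calls and recording min/avg/max makes the outliers visible; very large maxima usually indicate the thread was preempted or migrated between the two reads.

    #include <time.h>
    #include <cstdint>
    #include <iostream>
    #include <algorithm>

    // Nanoseconds elapsed from a to b.
    static int64_t diff_ns(const timespec& a, const timespec& b) {
        return (b.tv_sec - a.tv_sec) * 1000000000LL + (b.tv_nsec - a.tv_nsec);
    }

    int main() {
        const int N = 100000;
        int64_t min_ns = INT64_MAX, max_ns = 0, sum_ns = 0;
        timespec t0, t1;
        for (int i = 0; i < N; ++i) {
            clock_gettime(CLOCK_MONOTONIC, &t0);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            int64_t d = diff_ns(t0, t1);
            min_ns = std::min(min_ns, d);
            max_ns = std::max(max_ns, d);
            sum_ns += d;
        }
        std::cout << "min " << min_ns << " ns, avg " << sum_ns / N
                  << " ns, max " << max_ns << " ns\n";
        return 0;
    }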

C: Different implementation of clock() in Windows and other OS?

烈酒焚心 submitted on 2019-11-26 21:40:28
Question: I had to write a very simple console program for university that had to measure the time required to make an input. Therefore I used clock() before and after an fgets() call. When running on my Windows computer it worked perfectly. However, when running on my friend's MacBook and Linux PC it gave extremely small results (only a few microseconds). I tried the following code on all 3 OS:

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    void main() {
        clock_t t;
        printf("Sleeping
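
The underlying difference, illustrated here with a small sketch (assuming a POSIX system; this is not the thread's answer verbatim): on Linux and macOS, clock() returns CPU time consumed by the process, which stays near zero while the program is blocked in fgets() or sleep(), whereas the Microsoft CRT's clock() returns wall-clock time since process start. Timing a sleep() with both clock() and a wall clock makes this visible:

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        struct timespec w0, w1;
        clock_t c0 = clock();                 // CPU time on POSIX, wall time on Windows
        clock_gettime(CLOCK_MONOTONIC, &w0);  // wall time (POSIX)

        sleep(1);                             // blocked: consumes almost no CPU time

        clock_t c1 = clock();
        clock_gettime(CLOCK_MONOTONIC, &w1);

        printf("clock():         %f s\n", (double)(c1 - c0) / CLOCKS_PER_SEC);
        printf("clock_gettime(): %f s\n",
               (w1.tv_sec - w0.tv_sec) + (w1.tv_nsec - w0.tv_nsec) / 1e9);
        return 0;
    }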

Android: DigitalClock remove seconds

不问归期 submitted on 2019-11-26 21:38:26
Question: I used this code for adding a clock to my app:

    <DigitalClock
        android:id="@+id/digitalclock"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentLeft="true"
        android:textSize="30sp" />

The problem is that it also shows seconds. Is there a simple and fast way to hide those? I need just hours and minutes in hh:mm format instead of hh:mm:ss. Any suggestions? Thanks! Answer 1: Found the answer here, for anyone else looking for a working answer, here it

faster equivalent of gettimeofday

南楼画角 submitted on 2019-11-26 21:30:51
In trying to build a very latency-sensitive application that needs to send hundreds of messages a second, each message carrying a time field, we wanted to consider optimizing gettimeofday. Our first thought was an rdtsc-based optimization. Any thoughts? Any other pointers? The required accuracy of the time value returned is in milliseconds, but it isn't a big deal if the value is occasionally out of sync with the receiver by 1-2 milliseconds. We are trying to do better than the 62 nanoseconds gettimeofday takes. Have you actually benchmarked, and found gettimeofday to be unacceptably slow? At the rate of
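
One commonly suggested alternative when millisecond accuracy is enough (a sketch assuming Linux; this is not necessarily the answer given in the thread) is CLOCK_REALTIME_COARSE, which returns a timestamp cached at the last timer tick instead of reading the hardware counter. A micro-benchmark comparing it with gettimeofday:

    #include <sys/time.h>
    #include <time.h>
    #include <stdio.h>

    int main(void) {
        const int N = 1000000;
        struct timeval tv;
        struct timespec ts, a, b;

        // Per-call cost of gettimeofday (microsecond resolution).
        clock_gettime(CLOCK_MONOTONIC, &a);
        for (int i = 0; i < N; ++i) gettimeofday(&tv, NULL);
        clock_gettime(CLOCK_MONOTONIC, &b);
        double ns_gtod = ((b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec)) / N;

        // Per-call cost of CLOCK_REALTIME_COARSE (tick resolution, typically 1-4 ms,
        // but served from a cached kernel timestamp; Linux-specific).
        clock_gettime(CLOCK_MONOTONIC, &a);
        for (int i = 0; i < N; ++i) clock_gettime(CLOCK_REALTIME_COARSE, &ts);
        clock_gettime(CLOCK_MONOTONIC, &b);
        double ns_coarse = ((b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec)) / N;

        printf("gettimeofday:          %.1f ns/call\n", ns_gtod);
        printf("CLOCK_REALTIME_COARSE: %.1f ns/call\n", ns_coarse);
        return 0;
    }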

Issue when scheduling tasks using clock() function

爷，独闯天下 submitted on 2019-11-26 20:57:35
I would like to schedule tasks at different time intervals: at 0.1 s, 0.9 s, ... 2 s, etc. I use the clock() C++ function, which returns the number of ticks since the beginning of the simulation, and I convert the tick count to seconds using CLOCKS_PER_SEC. But I have noticed that the task isn't scheduled when the instant is a float; when it's an integer it is. Here is the portion of the code responsible for the scheduling:

    float goal = (float) clock() / CLOCKS_PER_SEC + 0.4 ; // initially (float) clock() / CLOCKS_PER_SEC = 0 ;
    if ((float) clock() / CLOCKS_PER_SEC == goal)
        do stuff ;

In that
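
The usual explanation is that an exact floating-point equality test against a continuously advancing clock almost never fires: 0.4 is not exactly representable as a float, and the clock is unlikely to be sampled at exactly the goal instant. A sketch of the conventional fix (compare with >= and advance the goal; kept deliberately close to the question's code, not copied from the thread's answer):

    #include <time.h>
    #include <stdio.h>

    int main(void) {
        // Compare with >= instead of ==: the sampled time will rarely equal 'goal' exactly.
        double goal = (double)clock() / CLOCKS_PER_SEC + 0.4;

        for (;;) {
            double now = (double)clock() / CLOCKS_PER_SEC;
            if (now >= goal) {
                printf("task fired at %.3f s\n", now);
                goal += 0.4;            // schedule the next occurrence
            }
            if (now > 2.0) break;       // stop the demo after 2 seconds
        }
        return 0;
    }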

Is it legal for a C++ optimizer to reorder calls to clock()?

早过忘川 submitted on 2019-11-26 19:14:29
Question: The C++ Programming Language, 4th edition, page 225 reads: "A compiler may reorder code to improve performance as long as the result is identical to that of the simple order of execution." Some compilers, e.g. Visual C++ in release mode, will reorder this code:

    #include <time.h>
    ...
    auto t0 = clock();
    auto r = veryLongComputation();
    auto t1 = clock();
    std::cout << r << " time: " << t1-t0 << endl;

into this form:

    auto t0 = clock();
    auto t1 = clock();
    auto r = veryLongComputation();
    std::cout <<
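
Independent of the legality question, a common practical mitigation (a sketch using GCC/Clang inline assembly; this is not from the book or necessarily the thread's answer) is to place a compiler barrier between the computation and the second clock() call, so the computation cannot be moved across the timing points:

    #include <time.h>
    #include <iostream>

    // Tells the compiler that 'value' may be read or modified by unknown code,
    // preventing the work that produced it from being hoisted past this point.
    // GCC/Clang-specific; assumes a register-sized type.
    template <class T>
    inline void do_not_optimize(T& value) {
        asm volatile("" : "+r"(value) : : "memory");
    }

    // Placeholder workload so the sketch is self-contained.
    long veryLongComputation() {
        long s = 0;
        for (long i = 0; i < 100000000; ++i) s += i % 7;
        return s;
    }

    int main() {
        auto t0 = clock();
        auto r = veryLongComputation();
        do_not_optimize(r);           // barrier between the computation and the second clock()
        auto t1 = clock();
        std::cout << r << " time: " << (t1 - t0) << std::endl;
        return 0;
    }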

Time in milliseconds in C

隐身守侯 submitted on 2019-11-26 19:05:56
Question: Using the following code:

    #include<stdio.h>
    #include<time.h>

    int main() {
        clock_t start, stop;
        int i;
        start = clock();
        for(i=0; i<2000; i++) {
            printf("%d", (i*1)+(1^4));
        }
        printf("\n\n");
        stop = clock();
        //(double)(stop - start) / CLOCKS_PER_SEC
        printf("%6.3f", start);
        printf("\n\n%6.3f", stop);
        return 0;
    }

I get the following output:
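
Note that the last two printf calls pass a clock_t to a %f conversion, which is undefined behavior. The usual way to get elapsed milliseconds (a sketch of the standard fix, not necessarily the thread's accepted answer) is to subtract the two readings, divide by CLOCKS_PER_SEC, and scale by 1000:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        clock_t start = clock();

        for (int i = 0; i < 2000; i++) {
            printf("%d", (i * 1) + (1 ^ 4));
        }
        printf("\n\n");

        clock_t stop = clock();

        // Convert the tick difference to milliseconds; casting to double before
        // printing avoids handing a clock_t to a %f conversion.
        double elapsed_ms = 1000.0 * (double)(stop - start) / CLOCKS_PER_SEC;
        printf("elapsed: %6.3f ms\n", elapsed_ms);
        return 0;
    }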