Delay execution 1 second

青春惊慌失措 · 2020-12-20 23:51

So I am trying to program a simple tick-based game. I write in C++ on a Linux machine. The code below illustrates what I'm trying to accomplish.

for (unsign         


        
4 Answers
  • 2020-12-21 00:17

    sleep(n) waits for n seconds, not n microseconds. Also, as mentioned by Bart, if you're writing to stdout, you should flush the stream after each write - otherwise, you won't see anything until the buffer is flushed.

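    A minimal sketch of that combination, where functioncall() is just a stand-in name for the asker's per-tick work:

    #include <stdio.h>        // printf(), fflush()
    #include <unistd.h>       // sleep()

    void functioncall(unsigned tick) { printf("tick %u\r", tick); }

    int main() {
      for (unsigned i = 0; i < 10; ++i) {
        functioncall(i);
        fflush(stdout);  // otherwise the text sits in the buffer and nothing shows up
        sleep(1);        // whole seconds; usleep() takes microseconds if you need finer steps
      }
    }
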
  • 2020-12-21 00:17

    So I am trying to program a simple tick-based game. I write in C++ on a linux machine.

    If functioncall() may take a considerable amount of time, your ticks won't be evenly spaced if you always sleep for the same fixed duration.

    You might be trying to do this:

    while 1:          # main loop
        functioncall()
        tick()        # wait for the next tick
    

    Here tick() sleeps for approximately delay - time_it_takes_for(functioncall), i.e., the longer functioncall() takes, the less time tick() sleeps.

    sleep() sleeps an integer number of seconds. You might need a finer time resolution. You could use clock_nanosleep() for that.

    Example Clock::tick() implementation

    // $ g++ *.cpp -lrt && time ./a.out
    #include <iostream>
    #include <errno.h>         // errno
    #include <stdio.h>         // perror()
    #include <stdlib.h>        // ldiv(), exit()
    #include <time.h>          // clock_nanosleep(), clock_gettime()
    
    namespace {
      class Clock {
        const long delay_nanoseconds;
        bool running;
        struct timespec time;
        const clockid_t clock_id;
    
      public:
        explicit Clock(unsigned fps) :  // specify frames per second
          delay_nanoseconds(1e9/fps), running(false), time(),
          clock_id(CLOCK_MONOTONIC) {}
    
        void tick() {
          // clock_nanosleep() returns an error number directly; it does not set errno
          if (int err = clock_nanosleep(clock_id, TIMER_ABSTIME, nexttick(), 0)) {
            errno = err;  // so that perror() reports the right message
            perror("clock_nanosleep");
            exit(EXIT_FAILURE);
          }
        }
      private:
        struct timespec* nexttick() {
          if (not running) { // initialize `time`
            running = true;
            if (clock_gettime(clock_id, &time)) {
              //process errors
              perror("clock_gettime");
              exit(EXIT_FAILURE);
            }
          }
          // increment `time`
          // time += delay_nanoseconds
          ldiv_t q = ldiv(time.tv_nsec + delay_nanoseconds, 1000000000);
          time.tv_sec  += q.quot;
          time.tv_nsec = q.rem;
          return &time;
        }
      };
    }
    
    int main() {
      Clock clock(20);
      char arrows[] = "\\|/-";
      for (int nframe = 0; nframe < 100; ++nframe) { // mainloop
        // process a single frame
        std::cout << arrows[nframe % (sizeof(arrows)-1)] << '\r' << std::flush;
        clock.tick(); // wait for the next tick
      }
    }
    

    Note: I've used the std::flush manipulator to update the output immediately.

    If you run the program it should take about 5 seconds (100 frames, 20 frames per second).

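    For comparison, here is a rough, portable C++11 sketch of the same fixed-rate loop using std::this_thread::sleep_until() instead of clock_nanosleep():

    #include <chrono>
    #include <iostream>
    #include <thread>

    int main() {
      using clock = std::chrono::steady_clock;
      const auto frame = std::chrono::milliseconds(50);   // 20 frames per second
      const char arrows[] = "\\|/-";
      auto next = clock::now() + frame;
      for (int nframe = 0; nframe < 100; ++nframe) {      // main loop
        std::cout << arrows[nframe % (sizeof(arrows) - 1)] << '\r' << std::flush;
        std::this_thread::sleep_until(next);  // absolute deadline, like TIMER_ABSTIME
        next += frame;                        // drift from slow frames does not accumulate
      }
    }
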
  • 2020-12-21 00:23

    I guess on Linux you have to use usleep(), which is declared in <unistd.h> and takes microseconds.

    On Windows you can use Sleep() from <windows.h>, which takes milliseconds.

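    A rough sketch of that idea, with a hypothetical delay_ms() helper wrapping the platform-specific call:

    #ifdef _WIN32
    #include <windows.h>
    static void delay_ms(unsigned ms) { Sleep(ms); }          // Windows: milliseconds
    #else
    #include <unistd.h>
    static void delay_ms(unsigned ms) { usleep(ms * 1000); }  // POSIX: microseconds
    #endif

    int main() {
      for (int i = 0; i < 5; ++i) {
        // ... do one tick of work ...
        delay_ms(500);  // half a second; some systems reject usleep() arguments of one second or more
      }
    }
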
  • 2020-12-21 00:37

    To reiterate what has already been stated by others, with a concrete example:

    Assuming you're using std::cout for output, you should call std::cout.flush(); right before the sleep call. See this MS knowledgebase article.

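    A minimal sketch of that ordering with std::cout:

    #include <iostream>
    #include <unistd.h>   // sleep()

    int main() {
      for (int i = 0; i < 5; ++i) {
        std::cout << "tick " << i << '\r';
        std::cout.flush();  // push the buffered text to the terminal first
        sleep(1);           // then wait one second
      }
      std::cout << '\n';
    }
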