Question
So I am trying to program a simple tick-based game. I write in C++ on a linux machine. The code below illustrates what I'm trying to accomplish.
for (unsigned int i = 0; i < 40; ++i)
{
    functioncall();
    sleep(1000); // wait 1 second for the next function call
}
Well, this doesn't work. It seems that it sleeps for 40 seconds, then prints out whatever the result is from the function call.
I also tried creating a new function called delay, and it looked like this:
void delay(int seconds)
{
    time_t start, current;
    time(&start);
    do
    {
        time(&current);
    }
    while ((current - start) < seconds);
}
Same result here. Anybody?
Answer 1:
To reiterate what has already been stated by others, with a concrete example: assuming you're using std::cout for output, you should call std::cout.flush() right before the sleep call. See this MS knowledgebase article.
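A minimal sketch of what that looks like, using the loop from the question (functioncall() here is just a stand-in for whatever work the game does each tick):

#include <iostream>
#include <unistd.h> // sleep()

void functioncall() { std::cout << "tick "; } // placeholder for the real per-tick work

int main() {
    for (unsigned int i = 0; i < 40; ++i) {
        functioncall();
        std::cout.flush(); // push buffered output to the terminal before sleeping
        sleep(1);          // POSIX sleep() takes whole seconds
    }
}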
Answer 2:
sleep(n) waits for n seconds, not n milliseconds. Also, as mentioned by Bart, if you're writing to stdout you should flush the stream after each write; otherwise, you won't see anything until the buffer is flushed.
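As a rough sketch of that fix (assuming the original intent was a one-second tick written in sub-second units), usleep(), which takes microseconds, can express the same delay with finer resolution:

#include <iostream>
#include <unistd.h> // usleep()

int main() {
    for (unsigned int i = 0; i < 40; ++i) {
        std::cout << "tick " << i << std::endl; // std::endl ends the line and flushes the stream
        usleep(1000 * 1000); // usleep() takes microseconds: 1,000,000 us = 1 s
    }
}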
Answer 3:
"So I am trying to program a simple tick-based game. I write in C++ on a linux machine."
If functioncall() may take a considerable time, then your ticks won't be equal in length if you always sleep for the same amount of time.
You might be trying to do this:
while (true) { // mainloop
    functioncall();
    tick();      // wait for the next tick
}
Here tick() sleeps approximately delay - time_it_takes_for(functioncall), i.e., the longer functioncall() takes, the less time tick() sleeps.
sleep() sleeps an integer number of seconds. You might need a finer time resolution; you could use clock_nanosleep() for that.
Example Clock::tick() implementation
// $ g++ *.cpp -lrt && time ./a.out
#include <iostream>
#include <errno.h>  // errno
#include <stdio.h>  // perror()
#include <stdlib.h> // ldiv(), exit()
#include <time.h>   // clock_nanosleep(), clock_gettime()

namespace {
class Clock {
    const long delay_nanoseconds;
    bool running;
    struct timespec time;
    const clockid_t clock_id;
public:
    explicit Clock(unsigned fps) : // specify frames per second
        delay_nanoseconds(1e9 / fps), running(false), time(),
        clock_id(CLOCK_MONOTONIC) {}

    void tick() {
        // sleep until the absolute time of the next tick
        int err = clock_nanosleep(clock_id, TIMER_ABSTIME, nexttick(), 0);
        if (err) {
            // interrupted by a signal handler or an error;
            // clock_nanosleep() reports errors via its return value, not errno
            errno = err;
            perror("clock_nanosleep");
            exit(EXIT_FAILURE);
        }
    }
private:
    struct timespec* nexttick() {
        if (not running) { // initialize `time` on the first tick
            running = true;
            if (clock_gettime(clock_id, &time)) {
                // process errors
                perror("clock_gettime");
                exit(EXIT_FAILURE);
            }
        }
        // increment `time`: time += delay_nanoseconds
        ldiv_t q = ldiv(time.tv_nsec + delay_nanoseconds, 1000000000);
        time.tv_sec += q.quot;
        time.tv_nsec = q.rem;
        return &time;
    }
};
}

int main() {
    Clock clock(20);
    char arrows[] = "\\|/-";
    for (int nframe = 0; nframe < 100; ++nframe) { // mainloop
        // process a single frame
        std::cout << arrows[nframe % (sizeof(arrows) - 1)] << '\r' << std::flush;
        clock.tick(); // wait for the next tick
    }
}
Note: I've used std::flush to update the output immediately.
If you run the program it should take about 5 seconds (100 frames, 20 frames per second).
Answer 4:
I guess on Linux you have to use usleep(); it's declared in <unistd.h>.
On Windows you can use Sleep() from <windows.h>, which takes milliseconds.
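For what it's worth, a hedged cross-platform sketch of a one-second delay (the exact headers and calls used here are an assumption, not something spelled out in this answer):

#ifdef _WIN32
#include <windows.h> // Sleep()
#else
#include <unistd.h>  // sleep()
#endif

void delay_one_second() {
#ifdef _WIN32
    Sleep(1000); // Windows Sleep() takes milliseconds
#else
    sleep(1);    // POSIX sleep() takes whole seconds
#endif
}

int main() {
    delay_one_second();
    return 0;
}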
Source: https://stackoverflow.com/questions/9616957/delay-execution-1-second