This is most likely a machine-dependent issue, but I can't figure out what could be wrong.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
Two things:

1. rand() is quite bad; avoid it if possible. In any good RNG the first values are indistinguishable from random even when the seeds are close (in Hamming distance). With rand() this is not the case.
2. Seed once with srand() and then call rand() multiple times, instead of reseeding-calling-reseeding.

For an example of (2), consider:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main(void) {
    srand((unsigned)time(NULL));  /* seed once */
    for (int i = 0; i < 10; i++) {
        /* scale rand() into [0, 1] */
        float r = (float)rand() / (float)RAND_MAX;
        printf("%f\n", r);
    }
    return 0;
}
With the result:
0.460600
0.310486
0.339473
0.519799
0.258825
0.072276
0.749423
0.552250
0.665374
0.939103
It's still a bad RNG, but at least the range is better when you allow it to use its internal state instead of giving it another similar seed.
This is exactly what you should expect. There's no such thing as "a random number"; there are only sequences of numbers with a random distribution. The rand() function generates such sequences, but you're not giving it a chance to, because you keep re-seeding it. The first number generated by rand() may very well be just some function of the seed, or the seed itself. Some rand() implementations might hash the seed to hide this, but that doesn't really make them any better, because the contract of rand() is to produce a random sequence.
If you need a sequence of random numbers that survives running multiple programs, you'll have to do something like:

(a) write a program that calls srand() once, then calls rand() many times, and have your other programs ask it for random numbers over IPC;
(b) use something like /dev/urandom;
(c) use something like random.org.