I can't get my head around this, which is more random?
rand()
OR:
rand() * rand()
The accepted answer is quite lovely, but there's another way to answer your question. PachydermPuncher's answer already takes this alternative approach, and I'm just going to expand it out a little.
The easiest way to think about information theory is in terms of the smallest unit of information, a single bit.
In the C standard library, rand() returns an integer in the range 0 to RAND_MAX, a limit that may be defined differently depending on the platform. Suppose RAND_MAX happens to be defined as 2^n - 1, where n is some integer (this happens to be the case in Microsoft's implementation, where n is 15). Then we would say that a good implementation would return n bits of information.
Imagine that rand() constructs random numbers by flipping a coin to find the value of one bit, and then repeating until it has a batch of 15 bits. Then the bits are independent (the value of any one bit does not influence the likelihood of other bits in the same batch having a certain value). So each bit considered independently is like a random number between 0 and 1 inclusive, and is "evenly distributed" over that range (as likely to be 0 as 1).
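To make the coin-flip picture concrete, here's a minimal C sketch (my illustration, not from any real library) in which a hypothetical coin_flip() helper supplies one fair bit at a time and fifteen of them are packed into a batch:

#include <stdlib.h>

/* Hypothetical helper: one fair coin flip. For illustration we fake it
   with the low bit of rand(); a real implementation would use a proper
   source of random bits. */
static int coin_flip(void) {
    return rand() & 1;
}

/* Build a 15-bit number one independent bit at a time. If every bit is
   fair and independent, the result is uniform over 0 to 32767. */
static int random_15_bits(void) {
    int value = 0;
    for (int i = 0; i < 15; i++)
        value = (value << 1) | coin_flip();
    return value;
}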
The independence of the bits ensures that the numbers represented by batches of bits will also be evenly distributed over their range. This is intuitively obvious: if there are 15 bits, the allowed range is zero to 2^15 - 1 = 32767. Every number in that range is a unique pattern of bits, such as:
010110101110010
and if the bits are independent then no pattern is more likely to occur than any other pattern. So all possible numbers in the range are equally likely. And the reverse is true too: if rand() produces evenly distributed integers, then those numbers are made of independent bits.
So think of rand() as a production line for making bits, which just happens to serve them up in batches of arbitrary size. If you don't like the size, break the batches up into individual bits, and then put them back together in whatever quantities you like (though if you need a particular range that is not a power of 2, you need to shrink your numbers, and by far the easiest way to do that is to convert to floating point).
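As a sketch of that last step (my addition, assuming only standard C): divide to get into [0, 1), then stretch to the range you actually want.

#include <stdlib.h>

/* Roughly uniform integer in 0 to n-1 for a range that isn't a power
   of two. Dividing by RAND_MAX + 1.0 maps rand() into [0, 1); scaling
   by n and truncating shrinks it to the target range. (There is a
   small bias unless n divides RAND_MAX + 1.) */
int rand_below(int n) {
    double u = (double)rand() / ((double)RAND_MAX + 1.0);
    return (int)(u * n);
}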
Returning to your original suggestion, suppose you want to go from batches of 15 to batches of 30: ask rand() for the first number, bit-shift it by 15 places, then add another rand() to it. That is a way to combine two calls to rand() without disturbing the even distribution. It works simply because there is no overlap between the locations where you place the bits of information.
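In C that combination might look like the following sketch, assuming RAND_MAX is 2^15 - 1 = 32767 (as in the Microsoft implementation mentioned above):

#include <stdlib.h>

/* Combine two 15-bit batches into one 30-bit random number. The two
   calls fill non-overlapping bit positions, so if each call is uniform
   over 0 to 32767, the result is uniform over 0 to 2^30 - 1. */
unsigned long rand30(void) {
    unsigned long high = (unsigned long)rand() << 15; /* bits 15..29 */
    unsigned long low  = (unsigned long)rand();       /* bits 0..14  */
    return high | low;
}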
This is very different to "stretching" the range of rand() by multiplying by a constant. For example, if you wanted to double the range of rand() you could multiply by two - but now you'd only ever get even numbers, and never odd numbers! That's not exactly a smooth distribution and might be a serious problem depending on the application, e.g. a roulette-like game supposedly allowing odd/even bets. (By thinking in terms of bits, you'd avoid that mistake intuitively, because you'd realise that multiplying by two is the same as shifting the bits to the left (greater significance) by one place and filling in the gap with zero. So obviously the amount of information is the same - it just moved a little.)
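Seen in code (my illustration), the problem is immediate:

int stretched = 2 * rand(); /* range doubled to 0 to 2*RAND_MAX ... */
/* ... but the low bit is always zero, so the result is always even:
   multiplying by two just shifted every bit one place to the left. */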
Such gaps in number ranges can't be griped about in floating point number applications, because floating point ranges inherently have gaps in them that simply cannot be represented at all: an infinite number of missing real numbers exist in the gap between any two representable floating point numbers! So we just have to learn to live with gaps anyway.
As others have warned, intuition is risky in this area, especially because mathematicians can't resist the allure of real numbers, which are horribly confusing things full of gnarly infinities and apparent paradoxes.
But at least if you think in terms of bits, your intuition might get you a little further. Bits are really easy - even computers can understand them.
Although the previous answers are right whenever you try to assess the randomness of a pseudo-random variable or its multiplication, you should be aware that while Random() is usually uniformly distributed, Random() * Random() is not.
Here is a sample of a uniform distribution, simulated through a pseudo-random variable:
BarChart[BinCounts[RandomReal[{0, 1}, 50000], 0.01]]
While this is the distribution you get after multiplying two random variables:
BarChart[BinCounts[RandomReal[{0, 1}, 50000] *
  RandomReal[{0, 1}, 50000], 0.01]]
So, both are “random”, but their distributions are very different.
While 2 * Random() is uniformly distributed:
BarChart[BinCounts[2 * RandomReal[{0, 1}, 50000], 0.01]]
Random() + Random() is not!
BarChart[BinCounts[RandomReal[{0, 1}, 50000] +
  RandomReal[{0, 1}, 50000], 0.01]]
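For reference (my addition, standard probability rather than anything from the original post): the sum of two independent uniform [0,1] variables has the triangular density f(s) = s for 0 <= s <= 1 and f(s) = 2 - s for 1 <= s <= 2, which is exactly the tent shape the bar chart traces out, whereas 2 * Random() merely stretches the flat density over [0,2].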
The Central Limit Theorem states that the sum of uniformly distributed random variables such as Random() tends toward a normal distribution as the number of terms increases.
With just four terms you get:
BarChart[BinCounts[RandomReal[{0, 1}, 50000] + RandomReal[{0, 1}, 50000] +
  RandomReal[{0, 1}, 50000] + RandomReal[{0, 1}, 50000], 0.01]]
And here you can see the road from a uniform to a normal distribution by adding up 1, 2, 4, 6, 10 and 20 uniformly distributed random variables:
Edit

A few credits:

Thanks to Thomas Ahle for pointing out in the comments that the probability distributions shown in the last two images are known as the Irwin-Hall distribution.

Thanks to Heike for her wonderful torn[] function.
The obligatory xkcd ...
Most rand() implementations have some period. I.e. after some enormous number of calls the sequence repeats. The sequence of outputs of rand() * rand() repeats after half as many calls, since each product consumes two values from the underlying generator, so it is "less random" in that sense.
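A toy illustration of the period point (my sketch, using a deliberately tiny generator rather than a real rand()):

#include <stdio.h>

/* A tiny full-period LCG: x -> (5x + 3) mod 16 visits all 16 states. */
static unsigned state = 1;
static unsigned tiny_rand(void) {
    state = (5 * state + 3) % 16;
    return state;
}

int main(void) {
    /* Each product consumes two outputs, so after 8 products the
       16-value sequence is exhausted and the products repeat: the
       product sequence has half the period of the generator. */
    for (int i = 0; i < 16; i++)
        printf("%u ", tiny_rand() * tiny_rand());
    printf("\n");
    return 0;
}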
Also, without careful construction, performing arithmetic on random values tends to cause less randomness. A poster above cited "rand() + rand() + rand() ..." (k times, say), which will in fact tend to cluster around k times the mean of the values rand() returns: the sum's standard deviation grows only like sqrt(k) while its mean grows like k. (It's a random walk with steps symmetric about that mean.)
Assume for concreteness that your rand() function returns a uniformly distributed random real number in the range [0,1). (Yes, this example allows infinite precision. This won't change the outcome.) You didn't pick a particular language and different languages may do different things, but the following analysis holds with modifications for any non-perverse implementation of rand(). The product rand() * rand() is also in the range [0,1) but is no longer uniformly distributed. In fact, the product is more likely to land in the interval [0,1/4) than in [1/4,1): for ideal uniform factors, the probability of the product falling below 1/4 works out to 1/4 + (1/4)ln 4, about 0.6. More multiplication will skew the result even further toward zero. This makes the outcome more predictable. In broad strokes, more predictable == less random.
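A quick Monte Carlo check of that skew (my sketch, scaling rand() into [0,1)):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int N = 1000000;
    int below = 0;
    srand(1);
    for (int i = 0; i < N; i++) {
        double x = (double)rand() / ((double)RAND_MAX + 1.0);
        double y = (double)rand() / ((double)RAND_MAX + 1.0);
        if (x * y < 0.25)
            below++;
    }
    /* For ideal uniform factors the true value is 1/4 + (1/4)ln 4,
       about 0.597 - well over half the mass lies below one quarter. */
    printf("fraction of products below 1/4: %.3f\n", (double)below / N);
    return 0;
}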
Pretty much any sequence of operations on uniformly random input will be nonuniformly random, leading to increased predictability. With care, one can overcome this property, but then it would have been easier to generate a uniformly distributed random number in the range you actually wanted rather than wasting time with arithmetic.