Uniform distribution of integers using floating point source

back-end · open · 5 answers · 564 views
渐次进展 2021-01-06 00:23

The standard way to get a random integer in the range [0, n) in JavaScript - or any other language that only offers a random() function returning a float in the range [0, 1) - is Math.floor(Math.random() * n). Is the result actually uniformly distributed over [0, n)?

5 Answers
  •  悲哀的现实
    2021-01-06 01:00

    If Math.random (or equivalent) generated a uniformly-distributed bit pattern out of those bit patterns corresponding to floating point numbers in the range [0, 1), then it would produce an extremely biased sample. There are as many representable floating point numbers in [0.25, 0.5) as there are in [0.5, 1.0), which is also the same number of representable values in [0.125, 0.25), and so on: every halving of the range contains the same count. Since there are roughly a thousand such binades below 1.0, uniformly-distributed bit patterns would put only about one in a thousand values between 0.5 and 1.0. (assuming double-precision floating point numbers.)
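    That equal-count structure can be checked directly from the raw bit patterns, since adjacent positive doubles have adjacent 64-bit patterns (a small Node.js sketch; bits() is a helper written for this example, not a standard API):

    ```javascript
    // Count representable doubles in a binade by comparing raw bit patterns.
    // For positive finite values, the count of doubles in [a, b) is
    // simply bits(b) - bits(a).
    function bits(x) {
      const buf = new ArrayBuffer(8);
      new Float64Array(buf)[0] = x;
      return new BigUint64Array(buf)[0];
    }

    const inQuarterToHalf = bits(0.5) - bits(0.25); // doubles in [0.25, 0.5)
    const inHalfToOne     = bits(1.0) - bits(0.5);  // doubles in [0.5, 1.0)

    console.log(inQuarterToHalf === inHalfToOne);   // true: both are 2^52
    ```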

    Fortunately, that's not what Math.random does. One simple way of getting a uniformly distributed number (rather than bit pattern) is to generate a uniformly distributed bit pattern in [1.0, 2.0) - where the exponent is fixed, so all representable values are equally spaced - and then subtract 1.0; that's a fairly common strategy.
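    The strategy can be sketched like this (an illustration of the technique, not how any particular engine implements Math.random; doubleFromBits and its 52-bit BigInt argument are assumptions for the example):

    ```javascript
    // Sketch of the [1.0, 2.0) trick: take 52 uniformly random mantissa bits,
    // attach the exponent of 1.0 (biased exponent 0x3FF), reinterpret the
    // 64-bit pattern as a double in [1.0, 2.0), then subtract 1.0.
    function doubleFromBits(mantissa52) {
      const buf = new ArrayBuffer(8);
      new BigUint64Array(buf)[0] = 0x3FF0000000000000n | mantissa52;
      return new Float64Array(buf)[0] - 1.0; // uniform in [0, 1)
    }

    console.log(doubleFromBits(0n));              // 0, the smallest output
    console.log(doubleFromBits(2n ** 51n));       // 0.5, mantissa top bit set
    console.log(doubleFromBits(2n ** 52n - 1n));  // just below 1
    ```

    Because the exponent is pinned, every one of the 2^52 mantissa values maps to an equally likely, equally spaced result.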

    Regardless, the end result of Math.floor(Math.random() * n) is not quite uniformly distributed unless n is a power of 2, because of quantization bias. The number of possible floating point values which could be returned by Math.random is a power of 2, and if n is not a power of 2, then it is impossible to distribute the possible floating point values precisely evenly over all integers in [0, n). If Math.random returns a double-precision floating-point number and n is not huge, this bias is small, but it certainly exists.
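    The bias is easiest to see at a small scale. Suppose the source produced only 16 equally likely values k/16 (a 4-bit analogue of the real 2^53 case) and we mapped them onto n = 3:

    ```javascript
    // Scaled-down illustration of quantization bias: a perfect 4-bit source
    // has 16 equally likely values k/16, and floor((k/16) * 3) cannot split
    // 16 outcomes evenly across 3 buckets.
    const BITS = 4, N = 3;
    const counts = new Array(N).fill(0);
    for (let k = 0; k < 2 ** BITS; k++) {
      counts[Math.floor((k / 2 ** BITS) * N)]++;
    }
    console.log(counts); // [6, 5, 5] — bucket 0 is slightly over-represented
    ```

    With 2^53 source values the bucket counts still differ by one whenever n does not divide 2^53; the relative bias is just far smaller.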
