Efficient random generator for very large range (in python)

Asked by 你的背包, 2021-01-01 23:24

I am trying to create a generator that returns numbers in a given range that pass a particular test given by a function foo. However I would like the numbers to …

3 Answers
  • 2021-01-01 23:45

    The problem is basically generating a random permutation of the integers in the range 0..n-1.

    Luckily for us, these numbers have a very useful property: they all have a distinct value modulo n. If we can apply some mathematical operations to these numbers while taking care to keep each number distinct modulo n, it's easy to generate a permutation that appears random. And the best part is that we don't need any memory to keep track of numbers we've already generated, because each number is calculated with a simple formula.


    Examples of operations we can perform on every number x in the range include:

    • Addition: We can add any integer c to x.
    • Multiplication: We can multiply x with any number m that shares no prime factors with n.

    Applying just these two operations on the range 0..n-1 already gives quite satisfactory results:

    >>> n = 7
    >>> c = 1
    >>> m = 3
    >>> [((x+c) * m) % n for x in range(n)]
    [3, 6, 2, 5, 1, 4, 0]
    

    Looks random, doesn't it?

    If we generate c and m from a random number, it'll actually be random, too. But keep in mind that there is no guarantee that this algorithm will generate all possible permutations, or that each permutation has the same probability of being generated.
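    This limitation is easy to see for small n: affine maps x → (x + c)·m mod n can produce at most n·φ(n) distinct permutations, far fewer than n!. A quick check (using math.gcd for the coprimality test rather than the answer's factorization code) for n = 4:

```python
from math import gcd
from itertools import product

n = 4
# enumerate every permutation reachable as ((x + c) * m) % n
perms = {tuple(((x + c) * m) % n for x in range(n))
         for c, m in product(range(n), range(1, n))
         if gcd(m, n) == 1}
print(len(perms))  # 8 distinct permutations, out of 4! = 24 possible
```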


    Implementation

    The difficult part about the implementation is really just generating a suitable random m. I used the prime factorization code from this answer to do so.

    import random
    
    # credit for prime factorization code goes
    # to https://stackoverflow.com/a/17000452/1222951
    def prime_factors(n):
        gaps = [1,2,2,4,2,4,2,4,6,2,6]
        length, cycle = 11, 3
        f, fs, next_ = 2, [], 0
        while f * f <= n:
            while n % f == 0:
                fs.append(f)
                n //= f
            f += gaps[next_]
            next_ += 1
            if next_ == length:
                next_ = cycle
        if n > 1: fs.append(n)
        return fs
    
    def generate_c_and_m(n, seed=None):
        # we need to know n's prime factors to find a suitable multiplier m
        p_factors = set(prime_factors(n))
    
        def is_valid_multiplier(m):
            # m must not share any prime factors with n
            factors = prime_factors(m)
            return not p_factors.intersection(factors)
    
        # if no seed was given, generate random values for c and m
        if seed is None:
            c = random.randrange(n)
            m = random.randint(1, 2*n)
        else:
            c = seed
            m = max(seed, 1)  # m = 0 would map every x to 0
    
        # make sure m is valid
        while not is_valid_multiplier(m):
            m += 1
    
        return c, m
    

    Now that we can generate suitable values for c and m, creating the permutation is trivial:

    def random_range(n, seed=None):
        c, m = generate_c_and_m(n, seed)
    
        for x in range(n):
            yield ((x + c) * m) % n
    

    And your generator function can be implemented as

    def MyGenerator(foo, num):
        for x in random_range(num):
            if foo(x):
                yield x
    
  • 2021-01-01 23:58

    That may be a case where the best algorithm depends on the value of num, so why not use two selectable algorithms wrapped in one generator?

    You could mix your shuffle and set solutions with a threshold on the value of num. That's basically assembling your first two solutions in one generator:

    from random import shuffle, randint
    
    def MyGenerator(foo, num):
        if num < 100000:  # threshold has to be adjusted by experiments
            order = list(range(num))
            shuffle(order)
            for i in order:
                if foo(i):
                    yield i
        else:  # big values: few collisions with the random generator
            tried = set()
            while len(tried) < num:
                i = randint(0, num - 1)
                if i in tried:
                    continue
                tried.add(i)
                if foo(i):
                    yield i

    The randint solution (for big values of num) works well because repeats are rare while only a small fraction of the range has been consumed; driving the generator to exhaustion, on the other hand, degenerates into coupon-collector behaviour, with many wasted draws near the end.
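    How rare are the repeats? Drawing k times uniformly from a range of size n, the expected number of distinct values is n * (1 - (1 - 1/n)**k), so the repeat rate stays low as long as k is a small fraction of n. A back-of-envelope check (not part of the original answer):

```python
n, k = 10**6, 10**5  # range size, number of draws (10% of the range)
# expected distinct values after k uniform draws from n
expected_unique = n * (1 - (1 - 1 / n) ** k)
duplicate_rate = 1 - expected_unique / k
print(round(duplicate_rate, 3))  # ~0.048: about 4.8% of draws are repeats
```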

  • 2021-01-01 23:58

    Getting the best performance in Python is much trickier than in lower-level languages. For example, in C you can often save a little in hot inner loops by replacing a multiplication with a shift, but the overhead of Python's bytecode interpretation erases such micro-optimizations. Of course, this changes again when you consider which variant of "python" you're targeting (PyPy? NumPy? Cython?); you really have to write your code based on which one you're using.

    But even more important is arranging operations to avoid serialized dependencies, since all CPUs are superscalar these days. Of course, real compilers know about this, but it still matters when choosing an algorithm.


    One of the easiest ways to gain a little over the existing answers would be to generate numbers in chunks using numpy.arange() and apply ((x + c) * m) % n to the numpy ndarray directly. Every Python-level loop that can be avoided helps.

    If the function can be applied directly to numpy ndarrays, that might be even better. Of course, a sufficiently small function written in Python will be dominated by function-call overhead anyway.
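    A sketch of that chunked approach, assuming (as in the first answer) that m shares no prime factors with n; the function names and parameter values here are illustrative:

```python
import numpy as np

def random_range_chunks(n, c, m, chunk=65536):
    """Yield ((x + c) * m) % n over whole numpy chunks, avoiding a per-element Python loop."""
    for start in range(0, n, chunk):
        xs = np.arange(start, min(start + chunk, n), dtype=np.int64)
        yield ((xs + c) * m) % n

def filtered(n, c, m, foo):
    # foo is applied to an entire chunk at once (vectorized), then used as a boolean mask
    for vals in random_range_chunks(n, c, m):
        yield from vals[foo(vals)]

# example: multiples of 3 from a shuffled 0..n-1; m = 999983 is prime, hence coprime to 10**6
hits = filtered(10**6, 12345, 999983, lambda v: v % 3 == 0)
```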


    The best fast random-number-generator today is PCG. I wrote a pure-python port here but concentrated on flexibility and ease-of-understanding rather than speed.

    Xoroshiro128+ is second-best-quality and faster, but less informative to study.

    Python's (and many others') default choice of Mersenne Twister is among the worst.

    (there's also something called splitmix64 which I don't know enough about to place - some people say it's better than xoroshiro128+, but it has a period problem - of course, you might want that here)

    Both default-PCG and xoroshiro128+ use a 2N-bit state to generate N-bit numbers. This is generally desirable, but means numbers will be repeated. PCG has alternate modes that avoid this, however.

    Of course, much of this depends on whether num is (close to) a power of 2. In theory, PCG variants can be created for any bit width, but currently only various word sizes are implemented since you'd need explicit masking. I'm not sure exactly how to generate the parameters for new bit sizes (perhaps it's in the paper?), but they can be tested simply by doing a period/2 jump and verifying that the value is different.

    Of course, if you're only making 200 calls to the RNG, you probably don't actually need to avoid duplicates on the math side.


    Alternatively, you could use an LFSR, which does exist for every bit size (though note that it never generates the all-zeros value (or, equivalently, the all-ones value)). LFSRs are serial and (AFAIK) not jumpable, and thus can't be easily split across multiple tasks. Edit: I figured out that this is untrue: simply represent the advance step as a matrix and exponentiate it to jump.
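    A minimal Galois LFSR sketch (16-bit, with the well-known maximal-length tap mask 0xB400), which visits every nonzero 16-bit value exactly once per period:

```python
def galois_lfsr16(seed=1):
    """16-bit Galois LFSR with taps 0xB400 (maximal length: period 2**16 - 1)."""
    state = seed & 0xFFFF
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:          # feedback: XOR in the tap mask when the dropped bit was 1
            state ^= 0xB400
        yield state

gen = galois_lfsr16()
first = [next(gen) for _ in range(5)]  # first few states of the cycle
```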

    Note that LFSRs do have the same obvious biases as simply generating numbers in sequential order based on a random start point - for example, if rng_outputs[a:b] all fail your foo function, then rng_outputs[b] will be much more likely as a first output regardless of starting point. PCG's "stream" parameter avoids this by not generating numbers in the same order.

    Edit2: I have completed what I thought was a "brief project" implementing LFSRs in python, including jumping, fully tested.
