What is the big-O complexity of naive random selection from a finite set?

终归单人心 asked 2021-02-05 17:25

This question on getting random values from a finite set got me thinking...

It's fairly common for people to want to retrieve X unique values from a set of Y values.

8 Answers
  • 2021-02-05 18:12

    Before being able to answer this question in detail, let's define the framework. Suppose you have a collection {a1, a2, ..., an} of n distinct objects, and want to pick m distinct objects from this set, such that the probability of a given object aj appearing in the result is equal for all objects.

    If you have already picked k items, and randomly pick an item from the full set {a1, a2, ..., an}, the probability that the item has not been picked before is (n-k)/n. This means that the number of samples you have to take before you get a new object is (assuming independence of random sampling) geometric with parameter (n-k)/n. Thus the expected number of samples to obtain one extra item is n/(n-k), which is close to 1 if k is small compared to n.

    Concluding, if you need m unique objects, randomly selected, this algorithm gives you

    n/n + n/(n-1) + n/(n-2) + n/(n-3) + .... + n/(n-(m-1))

    which, as Alderath showed, is bounded above by

    m*n / (n-m+1).

    You can see a little bit more from this formula:

    * The expected number of samples to obtain a new unique element increases as the number of already chosen objects increases (which sounds logical).
    * You can expect really long computation times when m is close to n, especially if n is large.

    In order to obtain m unique members from the set, use a variant of Donald Knuth's algorithm for obtaining a random permutation (the Fisher–Yates shuffle). Here, I'll assume that the n objects are stored in an array.

    for i = 1..m
      k = randInt(i, n)
      exchange(i, k)
    end
    

    Here, randInt samples an integer from {i, i+1, ..., n}, and exchange swaps two members of the array. You only need to shuffle m times, so the computation time is O(m), whereas the memory is O(n) (although you can adapt it to only save the entries such that a[i] <> i, which would give you O(m) in both time and memory, but with higher constants).
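
    As a concrete illustration, here is a minimal runnable sketch of this partial shuffle in Python (the function name and zero-based indexing are my own; the pseudocode above is one-based):

    import random

    def sample_m_unique(items, m):
        # Partial Fisher-Yates shuffle: after i swaps, a[0..i-1] is a
        # uniform random sample of i distinct elements.
        a = list(items)                    # O(n) working copy
        n = len(a)
        for i in range(m):                 # only m swaps: O(m) time
            k = random.randint(i, n - 1)   # random index in {i, ..., n-1}
            a[i], a[k] = a[k], a[i]        # exchange(i, k)
        return a[:m]

    # Example: three distinct values from 0..9
    # print(sample_m_unique(range(10), 3))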

  • 2021-02-05 18:13

    If you're willing to make the assumption that your random number generator will always find a unique value before cycling back to a previously seen value for a given draw, this algorithm is O(m^2), where m is the number of unique values you are drawing.

    So, if you are drawing m values from a set of n values, the 1st value requires at most 1 draw to get a unique value. The 2nd requires at most 2 (you see the 1st value, then a unique value), the 3rd at most 3, ..., the mth at most m. Hence in total you require 1 + 2 + 3 + ... + m = m*(m+1)/2 = (m^2 + m)/2 draws. This is O(m^2).

    Without this assumption, I'm not sure how you can even guarantee the algorithm will complete. It's quite possible (especially with a pseudo-random number generator, which may have a cycle) that you will keep seeing the same values over and over and never get to another unique value.

    ==EDIT==

    For the average case:

    On your 1st draw, you will make exactly 1 draw. On your 2nd draw, you expect to make 1 (the successful draw) + 1/n (the "partial" draw which represents your chance of drawing a repeat). On your 3rd draw, you expect to make 1 (the successful draw) + 2/n (the "partial" draw...). ... On your mth draw, you expect to make 1 + (m-1)/n draws.

    Thus, you will make 1 + (1 + 1/n) + (1 + 2/n) + ... + (1 + (m-1)/n) draws altogether in the average case.

    This equals the sum from i=0 to (m-1) of [1 + i/n]. Let's denote that sum(1 + i/n, i, 0, m-1).

    Then:

    sum(1 + i/n, i, 0, m-1) = sum(1, i, 0, m-1) + sum(i/n, i, 0, m-1)
                            = m + sum(i/n, i, 0, m-1)
                            = m + (1/n) * sum(i, i, 0, m-1)
                            = m + (1/n)*[(m-1)*m]/2
                            = (m^2)/(2n) - (m)/(2n) + m 
    

    We drop the constants and keep the dominant terms, and we get that this is O(m + m^2/n), where m is the number to be drawn and n is the size of the list. Note that since m ≤ n, the m^2/n term is at most m, so under this analysis the average case is O(m) draws.
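
    To sanity-check this average case empirically, one can simulate the naive algorithm and compare the observed draw count with m + m(m-1)/(2n). A sketch (the set size and trial count are arbitrary choices of mine):

    import random

    def naive_draws(n, m):
        # Count raw draws the naive algorithm makes to collect m unique values.
        seen, draws = set(), 0
        while len(seen) < m:
            draws += 1
            seen.add(random.randrange(n))
        return draws

    n, m, trials = 1000, 100, 2000
    avg = sum(naive_draws(n, m) for _ in range(trials)) / trials
    print(avg, m + m * (m - 1) / (2 * n))  # observed vs. predicted average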

  • 2021-02-05 18:14

    The worst case for this algorithm is clearly when you're choosing the full set of N items. This is equivalent to asking: On average, how many times must I roll an N-sided die before each side has come up at least once?

    Answer: N · H_N, where H_N is the Nth harmonic number,

    H_N = 1 + 1/2 + 1/3 + ... + 1/N,

    a value famously approximated by ln(N).

    This means the algorithm in question is O(N log N).

    As a fun example, if you roll an ordinary 6-sided die until you see one of each number, it will take on average 6 · H_6 ≈ 14.7 rolls.
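
    A quick simulation of the die example (a sketch; the trial count is arbitrary):

    import random

    def rolls_until_all_faces(sides=6):
        # Roll until every face has appeared at least once.
        seen, rolls = set(), 0
        while len(seen) < sides:
            seen.add(random.randint(1, sides))
            rolls += 1
        return rolls

    trials = 10_000
    print(sum(rolls_until_all_faces() for _ in range(trials)) / trials)  # ~14.7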

  • 2021-02-05 18:16

    There's a beautiful O(n) algorithm for this. It goes as follows. Say you have n items, from which you want to pick m items. I assume the function rand() yields a random real number between 0 and 1. Here's the algorithm:

    items_left=n
    items_left_to_pick=m
    for j=1,...,n
        if rand()<=(items_left_to_pick/items_left)
            Pick item j
            items_left_to_pick=items_left_to_pick-1
        end
        items_left=items_left-1
    end
    

    It can be proved that this algorithm does indeed pick each subset of m items with equal probability, though the proof is non-obvious. It is known as selection sampling, described as Algorithm S in Knuth's The Art of Computer Programming, Vol. 2.

    Edit: The advantage of this algorithm is that it takes only O(m) memory (assuming the items are simply integers or can be generated on the fly), compared to doing a shuffle, which takes O(n) memory.
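
    For reference, a direct Python transcription of the algorithm (a sketch; the function name is mine, and the items are taken to be the indices 0..n-1 so they can be generated on the fly):

    import random

    def selection_sample(n, m):
        # Single O(n) pass; every m-subset of range(n) is equally likely.
        picked = []
        items_left, items_left_to_pick = n, m
        for j in range(n):
            if random.random() < items_left_to_pick / items_left:
                picked.append(j)            # pick item j
                items_left_to_pick -= 1
            items_left -= 1
        return picked

    # Example: print(selection_sample(10, 3))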

  • 2021-02-05 18:20

    Variables

    n = the total number of items in the set
    m = the number of unique values to be retrieved from the set of n items
    d(i) = the expected number of tries needed to obtain a new value in step i
    i = denotes one specific step. i ∈ [0, m-1]
    T(m,n) = the expected total number of tries for selecting m unique items from a set of n items using the naive algorithm

    Reasoning

    The first step, i=0, is trivial. No matter which value we choose, we get a unique one at the first attempt. Hence:

    d(0) = 1

    In the second step, i=1, we need at least 1 try (the try where we pick a valid unique value). On top of this, there is a chance that we choose the wrong value. This chance is (number of previously picked items)/(total number of items), in this case 1/n. If we picked the wrong item, there is again a 1/n chance that we pick the wrong item on the next try. Multiplying the two, since that is the combined probability of picking wrong both times, gives (1/n)^2. To understand this, it is helpful to draw a decision tree. Having picked a non-unique item twice, there is a probability that we will do it again, which adds (1/n)^3 to the total expected number of tries in step i=1. Each time we pick the wrong number, there is a chance we might pick the wrong number again. This results in:

    d(1) = 1 + 1/n + (1/n)^2 + (1/n)^3 + (1/n)^4 + ...

    Similarly, in the general i-th step, the chance to pick the wrong item in one choice is i/n, resulting in:

    d(i) = 1 + i/n + (i/n)^2 + (i/n)^3 + (i/n)^4 + ... =
    = sum( (i/n)^k ), where k ∈ [0, ∞)

    This is a geometric series, and hence its sum is easy to compute:

    d(i) = (1 - i/n)^(-1)

    The overall complexity is then computed by summing the expected amount of tries in each step:

    T(m,n) = sum( d(i) ), where i ∈ [0, m-1] =
    = 1 + (1 - 1/n)^(-1) + (1 - 2/n)^(-1) + (1 - 3/n)^(-1) + ... + (1 - (m-1)/n)^(-1)

    Extending the fractions in the series above by n, we get:

    T(m,n) = n/n + n/(n-1) + n/(n-2) + n/(n-3) + ... + n/(n-m+2) + n/(n-m+1)

    We can use the fact that:

    n/n ≤ n/(n-1) ≤ n/(n-2) ≤ n/(n-3) ≤ ... ≤ n/(n-m+2) ≤ n/(n-m+1)

    Since the series has m terms, and each term satisfies the inequality above, we get:

    T(m,n) ≤ n/(n-m+1) + n/(n-m+1) + n/(n-m+1) + n/(n-m+1) + ... + n/(n-m+1) + n/(n-m+1) =
    = m*n/(n-m+1)

    It might be (and probably is) possible to establish a slightly tighter upper bound by evaluating the series more carefully, instead of the rough bound (number of terms) * (largest term).

    Conclusion

    This would mean that the Big-O order is O(m*n/(n-m+1)). I see no way to simplify this expression further.

    Looking back at the result to check that it makes sense: if n is constant and m gets closer and closer to n, the result quickly increases, since the denominator gets very small. This is what we'd expect if we, for example, consider the example given in the question about selecting "999,999 values from a set of 1,000,000". If we instead let m be constant and let n grow really, really large, the complexity converges towards O(m) in the limit n → ∞. This is also what we'd expect, since when choosing a constant number of items from a "close to" infinitely sized set, the probability of choosing a previously chosen value is basically 0. That is, we need m tries independently of n, since there are no collisions.
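
    To make the bound concrete, the exact sum and the upper bound can be computed side by side (a sketch; the values mirror the question's 999,999-from-1,000,000 example):

    def T(m, n):
        # Exact expected number of tries: sum of n/(n-i) for i in [0, m-1].
        return sum(n / (n - i) for i in range(m))

    m, n = 999_999, 1_000_000
    print(T(m, n))              # exact expectation, roughly 1.34e7
    print(m * n / (n - m + 1))  # upper bound, roughly 5e11 (loose near m = n)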

  • 2021-02-05 18:20

    If you have already chosen i values, then the probability that you pick a new one from a set of y values is

    (y-i)/y.
    

    Hence the expected number of trials to get (i+1)-th element is

    y/(y-i).
    

    Thus the expected number of trials to choose x unique elements is the sum

     y/y + y/(y-1) + ... + y/(y-x+1)
    

    This can be expressed using harmonic numbers as

    y (H_y - H_{y-x}).

    From the wikipedia page you get the approximation

    H_x = ln(x) + gamma + O(1/x)

    Hence the number of necessary trials to pick x unique elements from a set of y elements is

    y (ln(y) - ln(y-x)) + O(y/(y-x)).
    

    If you need one, you can get a more precise result by using a more precise approximation for H_x. In particular, when x is small it is possible to improve the result a lot.
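
    A short check of this approximation against the exact sum (a sketch; the values of x and y are arbitrary):

    import math

    def exact_trials(y, x):
        # Exact expectation: y * (H_y - H_{y-x}).
        return sum(y / (y - i) for i in range(x))

    def approx_trials(y, x):
        # Approximation y * (ln(y) - ln(y-x)), from H_k ≈ ln(k) + gamma.
        return y * (math.log(y) - math.log(y - x))

    y, x = 10_000, 5_000
    print(exact_trials(y, x), approx_trials(y, x))  # both ≈ 6.93e3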
