Consider this method that works well:
public static bool mightBePrime(int N) {
    // One round of the Fermat test: for a prime N, a^(N-1) mod N == 1.
    // rGen and modExp are helpers defined elsewhere (a random source and
    // a modular-exponentiation routine).
    BigInteger a = rGen.Next(1, N - 1);
    return modExp(a, N - 1, N) == 1;
}
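As an aside, modExp presumably does what the framework's built-in BigInteger.ModPow does. A minimal self-contained sketch of the same check (my wording, not the original code; the class name is made up) looks like this:

using System;
using System.Numerics;

static class FermatCheck {
    static readonly Random rGen = new Random();

    public static bool MightBePrime(int N) {
        // Pick a random witness a in [1, N - 2]; Next's upper bound is exclusive.
        BigInteger a = rGen.Next(1, N - 1);
        // Fermat: if N is prime, a^(N-1) mod N == 1 for every a coprime to N.
        return BigInteger.ModPow(a, N - 1, N) == 1;
    }
}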
The naive implementation (drawing random bytes and retrying whenever the result falls outside the range) will fail on average 64 times before it finds a valid BigInteger within the specified range.
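To make "naive" concrete, here is a sketch of that approach (my illustration; the names are made up). It draws as many raw bytes as max occupies and rejects anything outside the range, so most draws overshoot:

using System;
using System.Numerics;

static class NaiveApproach {
    static readonly Random rng = new Random();

    // Rejection sampling with no masking: draw bytes and retry until the
    // value happens to land inside [min, max].
    public static BigInteger NaiveBetween(BigInteger min, BigInteger max) {
        byte[] bytes = max.ToByteArray();     // little-endian; last byte is most significant
        BigInteger candidate;
        do {
            rng.NextBytes(bytes);
            bytes[bytes.Length - 1] &= 0x7F;  // keep the sign bit 0 so the value is non-negative
            candidate = new BigInteger(bytes);
        } while (candidate < min || candidate > max);   // most draws miss the range
        return candidate;
    }
}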
In the worst case, my implementation will retry on average only 0.5 times (read as: 50% of the time it will find a result on the first try).
Also, unlike approaches based on modular arithmetic (reducing a random value with % range), my implementation maintains a uniform distribution: taking a remainder over-represents the smaller residues unless the range divides the source range evenly.
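The bias is easy to see exhaustively with small numbers (my example, not part of the original answer): reduce every possible byte value with % 100 and count the residues.

using System;

static class ModuloBiasDemo {
    static void Main() {
        // Over all 256 byte values, residues 0..55 have three preimages
        // (r, r + 100, r + 200) while 56..99 have only two, so "% 100"
        // applied to a uniform byte is not uniform.
        int[] counts = new int[100];
        for (int b = 0; b < 256; b++)
            counts[b % 100]++;
        Console.WriteLine($"count[0] = {counts[0]}, count[99] = {counts[99]}");  // prints 3 and 2
    }
}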
We must generate a random BigInteger between min and max:

1. If min > max, we swap min with max.
2. To simplify the implementation, we shift our range from [min, max] to [0, max - min]; this way we won't have to deal with the sign bit.
3. We count how many bytes max contains (bytes.Length).
4. Starting from the most significant bit, we count how many bits of max are 0 (zeroBits).
5. We generate a random sequence of bytes.Length bytes.
6. For our number to be < max, at least zeroBits bits from the most significant bit must be 0, so we use a zeroBitMask to clear them with a single bitwise & operation over the most significant byte; this saves a lot of time by reducing the chance of generating a number out of our range.
7. We check if the number we generated is > max, and if so we try again.
8. We shift our range back from [0, max - min] to [min, max] by adding min to our result.

And we have our number.
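Putting the steps together, here is a sketch of the full routine (my reconstruction of the steps above, not necessarily the exact original code; bytes, zeroBits, and zeroBitMask are the names used in the description, everything else is filler):

using System;
using System.Numerics;

static class RandomBigInteger {
    static readonly Random rng = new Random();  // swap for a crypto RNG where it matters

    public static BigInteger RandomInRange(BigInteger min, BigInteger max) {
        // Step 1: if min > max, swap them.
        if (min > max)
            (min, max) = (max, min);

        // Step 2: shift [min, max] to [0, max - min] so the sign bit is never set.
        BigInteger range = max - min;

        // Step 3: bytes.Length bytes are needed to hold the range.
        // ToByteArray is little-endian; the last byte is the most significant,
        // and for a non-negative value its top bit is always 0.
        byte[] bytes = range.ToByteArray();

        // Step 4: count the leading zero bits of the most significant byte
        // (zeroBits) and derive the mask that clears them (zeroBitMask).
        byte msb = bytes[bytes.Length - 1];
        int zeroBits = 0;
        while (zeroBits < 8 && (msb & (0x80 >> zeroBits)) == 0)
            zeroBits++;
        byte zeroBitMask = (byte)(0xFF >> zeroBits);

        BigInteger result;
        do {
            // Step 5: generate bytes.Length random bytes.
            rng.NextBytes(bytes);

            // Step 6: clear the top zeroBits bits with a single & so most
            // candidates already fall at or below the range.
            bytes[bytes.Length - 1] &= zeroBitMask;

            result = new BigInteger(bytes);

            // Step 7: in the worst case this overshoots ~50% of the time; retry.
        } while (result > range);

        // Step 8: shift back to [min, max] by adding min.
        return result + min;
    }
}

For example, RandomBigInteger.RandomInRange(BigInteger.One, BigInteger.Pow(2, 128)) yields a uniformly distributed value in that range. For cryptographic purposes, System.Random should be replaced with a cryptographically secure byte source such as System.Security.Cryptography.RandomNumberGenerator.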