I am encoding large integers into an array of size_t. I already have the other operations working (add, subtract, multiply), as well as division by a single digit.
The GMP library is usually a good reference for good algorithms. Their documented algorithms for division mainly depend on choosing a very large base, so that you're dividing a 4-digit number by a 2-digit number, and then proceeding via long division.

Long division will require computing 2-digit by 1-digit quotients; this can either be done recursively, or by precomputing an inverse and estimating the quotient as you would with Barrett reduction.
When dividing a 2n-bit number by an n-bit number, the recursive version costs O(M(n) log(n)), where M(n) is the cost of multiplying n-bit numbers. The version using Barrett reduction will cost O(M(n)) if you use Newton's algorithm to compute the inverse, but according to GMP's documentation, the hidden constant is a lot larger, so this method is only preferable for very large divisions.
In more detail, the core algorithm behind most division algorithms is an "estimated quotient with reduction" calculation, computing (q,r) so that x = qy + r, but without the restriction that 0 <= r < y. The typical loop is:

1. Estimate the quotient q of x/y
2. Compute the corresponding reduction r = x - qy
3. Adjust the results so that r is in some desired interval
4. If r is too big, then repeat with r in place of x

The quotient of x/y will be the sum of all the q's produced, and the final value of r will be the true remainder.
Schoolbook long division, for example, is of this form: step 3 covers those cases where the digit you guessed was too big or too small, and you adjust it to get the right value.
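This loop can be sketched on plain 64-bit integers; the helper names and the crude leading-bits estimator below are my own, not from any particular library:

```cpp
#include <cassert>
#include <cstdint>

// Crude quotient estimate: truncate both operands so the divisor fits
// in 32 bits, and round the truncated divisor up so the estimate never
// overshoots (hypothetical helper; requires y != 0).
static uint64_t estimate_q(uint64_t x, uint64_t y)
{
    int s = 0;
    while ((y >> s) >= (1ull << 32)) s++;
    uint64_t q = (x >> s) / ((y >> s) + 1);
    return q ? q : 1;                  // guarantee progress when x >= y
}

// "Estimated quotient with reduction": accumulate underestimates of the
// quotient until the reduction r finally lands in [0, y).
static void div_by_estimation(uint64_t x, uint64_t y,
                              uint64_t &q_out, uint64_t &r_out)
{
    uint64_t q_total = 0, r = x;
    while (r >= y) {                   // repeat with r in place of x
        uint64_t q = estimate_q(r, y); // estimate the quotient of r/y
        r -= q * y;                    // compute the reduction
        q_total += q;                  // sum of all the q's produced
    }
    q_out = q_total; r_out = r;        // true quotient and remainder
}
```

Because the estimator always rounds the divisor up, the reduction never goes negative and each pass removes at least y from the running remainder.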
The divide and conquer approach estimates the quotient of x/y by computing x'/y', where x' and y' are the leading digits of x and y. There is a lot of room for optimization by adjusting their sizes, but IIRC you get the best results if x' has twice as many digits as y'.
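A toy base-10 illustration of estimating from leading digits (the helper names and the `keep` parameter are mine; a real implementation works in the bigint's word base):

```cpp
#include <cassert>
#include <cstdint>

static int num_digits(uint64_t v) { int d = 0; do { d++; v /= 10; } while (v); return d; }
static uint64_t pow10u(int n) { uint64_t p = 1; while (n-- > 0) p *= 10; return p; }

// Estimate floor(x/y) from truncated leading digits x'/y'. Adding 1 to
// the truncated divisor keeps the estimate from overshooting, so a
// correction pass only ever has to increase the quotient.
static uint64_t estimate_from_leading(uint64_t x, uint64_t y, int keep)
{
    int drop = num_digits(y) - keep;  // low digits truncated from both operands
    if (drop < 0) drop = 0;
    uint64_t xp = x / pow10u(drop);   // x' : leading digits of x
    uint64_t yp = y / pow10u(drop);   // y' : leading digits of y
    return xp / (yp + 1);             // never exceeds floor(x/y)
}
```

For example, estimating 987654321 / 12345 while keeping 2 digits of the divisor gives 987654 / 13 = 75973, a safe underestimate of the true quotient 80004.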
The multiply-by-inverse approach is, IMO, the simplest if you stick to integer arithmetic. The basic method is:

1. Estimate the inverse of y with m = floor(2^k / y)
2. Estimate x/y with q = 2^(i+j-k) floor(floor(x / 2^i) m / 2^j)

In fact, practical implementations can tolerate additional error in m if it means you can use a faster reciprocal implementation.
The error is a pain to analyze, but if I recall the way to do it, you want to choose i and j so that x ~ 2^(i+j) due to how errors accumulate, and you want to choose x / 2^i ~ m^2 to minimize the overall work. The ensuing reduction will have r ~ max(x/m, y), so that gives a rule of thumb for choosing k: you want the size of m to be about the number of bits of quotient you compute per iteration (or equivalently, the number of bits you want to remove from x per iteration).
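Here is a concrete instance of the two steps on machine words, with my own parameter choice k = 32, i = 0, j = 32, assuming x and y fit in 32 bits; under these choices the formula collapses to q = floor(x*m / 2^32), and the estimate is off by at most 1:

```cpp
#include <cassert>
#include <cstdint>

// Multiply-by-inverse division for 32-bit operands (y != 0):
//   m = floor(2^32 / y)        -- step 1: estimate the inverse of y
//   q = floor(x * m / 2^32)    -- step 2: estimate x/y
// Since m > 2^32/y - 1 and x < 2^32, the estimate satisfies
// floor(x/y) - 1 <= q <= floor(x/y), so one correction step suffices.
static void div_by_inverse(uint32_t x, uint32_t y,
                           uint32_t &q_out, uint32_t &r_out)
{
    uint64_t m = (1ull << 32) / y;        // precomputable per divisor
    uint64_t q = ((uint64_t)x * m) >> 32; // quotient estimate
    uint64_t r = (uint64_t)x - q * y;     // reduction, 0 <= r < 2y
    if (r >= y) { q++; r -= y; }          // at most one adjustment needed
    q_out = (uint32_t)q; r_out = (uint32_t)r;
}
```

The win is that m is computed once per divisor, after which every division costs only one multiply and a shift; this is the same idea compilers use when strength-reducing division by a constant.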
I do not know the multiplicative inverse algorithm, but it sounds like a modification of Montgomery reduction or Barrett reduction.
I do bigint divisions a bit differently.

See bignum division. Especially take a look at the approximation divider and the two links there. One is my fixed-point divider and the others are fast multiplication algorithms (like Karatsuba and Schönhage-Strassen on NTT) with measurements, and a link to my very fast NTT implementation for a 32-bit base.
I'm not sure the multiplicative inverse is the way to go. It is mostly used for the modulo operation where the divisor is constant. I'm afraid that for arbitrary divisions the time and operations needed to acquire the bigint inverse can be bigger than the standard division itself, but as I am not familiar with it, I could be wrong.

The most common divider I saw in implementations is Newton-Raphson division, which is very similar to the approximation divider in the link above. Approximation/iterative dividers usually use multiplication, which defines their speed.
For small enough numbers, long binary division and 32/64-bit digit-base division are usually fast enough, if not fastest: they have a small overhead. Let n be the max value processed (not the number of digits!).

Binary division example:

It is O(log32(n)·log2(n)) = O(log^2(n)). It loops through all significant bits. In each iteration you need to compare, sub, add, and bitshift. Each of those operations can be done in O(log32(n)), and log2(n) is the number of bits.
Here is an example of binary division from one of my bigint templates (C++):
template <DWORD N> void uint<N>::div(uint &c, uint &d, uint a, uint b)
    {
    int j, sh;
    c = DWORD(0); d = 1;                    // c = quotient, d = value of current quotient bit
    sh = a.bits() - b.bits();               // align the leading bits of b with a
    if (sh < 0) sh = 0; else { b <<= sh; d <<= sh; }
    for (;;)
        {
        j = geq(a, b);                      // compare a with the shifted b
        if (j)
            {
            c += d;                         // set this quotient bit
            sub(a, a, b);                   // a -= b
            if (j == 2) break;              // a == b -> remainder is zero
            }
        if (!sh) break;                     // all bit positions tried
        b >>= 1; d >>= 1; sh--;             // move to the next lower bit
        }
    d = a;                                  // d = remainder
    }
N is the number of 32-bit DWORDs used to store a bigint number.

c = a / b
d = a % b

geq(a,b) is a greater-or-equal comparison a >= b (done in log32(n) = N word operations). It returns 0 for a < b, 1 for a > b, and 2 for a == b.

sub(c,a,b) is c = a - b.
The speed boost is gained from the fact that this does not use multiplication (if you do not count the bit shifts).
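The same shift-and-subtract loop can be tried on plain 64-bit integers without the bigint class; a self-contained sketch, with bits() replaced by a GCC/Clang builtin:

```cpp
#include <cassert>
#include <cstdint>

// Bit length of v (position of the highest set bit); GCC/Clang builtin.
static int bit_count(uint64_t v) { return v ? 64 - __builtin_clzll(v) : 0; }

// Shift-and-subtract binary division: q = a / b, r = a % b (b != 0).
static void bin_div(uint64_t a, uint64_t b, uint64_t &q, uint64_t &r)
{
    q = 0;
    uint64_t d = 1;                        // value of the current quotient bit
    int sh = bit_count(a) - bit_count(b);  // align leading bits of b with a
    if (sh < 0) sh = 0; else { b <<= sh; d <<= sh; }
    for (;;) {
        if (a >= b) {                      // this quotient bit is set
            q += d;
            a -= b;
            if (a == 0) break;             // remainder is zero
        }
        if (!sh) break;                    // all bit positions tried
        b >>= 1; d >>= 1; sh--;            // next lower bit position
    }
    r = a;
}
```

Each pass through the loop handles one quotient bit, which is the log2(n) factor in the complexity above; in the bigint version, each compare/subtract/shift additionally costs log32(n) word operations.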
If you use digits with a big base like 2^32 (ALU words), then you can rewrite the whole thing in a polynomial-like style using the 32-bit built-in ALU operations. This is usually even faster than binary long division; the idea is to process each DWORD as a single digit, or to recursively divide the used arithmetic by half until you hit the CPU capabilities. See division by half-bitwidth arithmetics.
On top of all that, while computing with bignums: if you have optimized basic operations, then the complexity can lower even further, as sub-results get smaller with the iterations (changing the complexity of the basic operations). A nice example of that are NTT-based multiplications.

The overhead can mess things up. Due to this, the runtime sometimes does not match the big-O complexity, so you should always measure the thresholds and use the faster approach for the bit count in use to get the max performance, and optimize what you can.