TL;DR - Inform the interviewer that re-inventing the wheel is a bad idea
Rather than entertain the Code Golf exercise, I would have answered the interview question differently:
Brilliant engineers at Intel, AMD, ARM and other microprocessor manufacturers have agonized for decades over how to multiply 32-bit integers together in the fewest possible cycles, and in fact can even produce the correct, full 64-bit result of multiplying two 32-bit integers without overflow.
(e.g. without pre-casting a or b to long, a multiplication of two ints such as 123456728 * 23456789 overflows into a negative number)
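To illustrate (a minimal sketch; the variable names are mine):

    int wrapped = 123456728 * 23456789;       // 32-bit arithmetic: wraps around to a negative int
    long full = (long) 123456728 * 23456789;  // widening one operand first yields the correct 64-bit product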
In this respect, high-level languages have only one job to do with integer multiplications like this, viz. to hand the work to the processor with as little fluff as possible.
Any amount of Code Golf to replicate such multiplication in software IMO is insanity.
There are undoubtedly many hacks which could simulate multiplication, although many will only work on limited ranges of values of a and b (in fact, none of the 3 methods listed by the OP performs bug-free for all values of a and b, even if we disregard the overflow problem). And all will be orders of magnitude slower than an IMUL instruction.
For example, if either a or b is a positive power of 2, then the other variable can simply be shifted left by the base-2 logarithm of that power:
    if (b == 2)
        return a << 1;
    if (b == 4)
        return a << 2;
    ...
But this would be really tedious.
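The special-casing could be collapsed with Integer.numberOfTrailingZeros, although this is only a sketch (the method name is mine) and it still only covers the case where b happens to be a positive power of two:

    // Sketch: valid only when b is a positive power of two
    static int multiplyByPowerOfTwo(int a, int b) {
        if (b <= 0 || (b & (b - 1)) != 0)
            throw new IllegalArgumentException("b must be a positive power of 2");
        return a << Integer.numberOfTrailingZeros(b); // shift left by log2(b)
    }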
In the unlikely event of the * operator really disappearing overnight from the Java language spec, the next best thing would be to use existing libraries which contain multiplication functions, e.g. BigInteger.multiply(), for the same reasons: many years of critical thinking by minds brighter than mine have gone into producing, and testing, such libraries. BigInteger.multiply would obviously be reliable to 64 bits and beyond, although casting the result back to a 32-bit int would again invite overflow problems.
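A minimal sketch of that approach; intValueExact (available since Java 8) at least turns the silent overflow into an ArithmeticException:

    import java.math.BigInteger;

    static int multiply(int a, int b) {
        return BigInteger.valueOf(a)
                         .multiply(BigInteger.valueOf(b))
                         .intValueExact(); // throws ArithmeticException if the product doesn't fit in 32 bits
    }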
The problem with playing operator * Code Golf
There are inherent problems with all 3 of the solutions cited in the OP's question:
- Method A (the loop) won't work if the first number a is negative:

      for (int i = 0; i < a; i++) {
          resultat += b;
      }

  It will return 0 for any negative value of a, because the loop continuation condition is never met (a sign-aware loop is sketched after this list).
- In Method B, you'll run out of stack for large values of b, unless you refactor the recursion into an iterative loop (the JVM does not perform Tail Call Optimisation, so a tail-recursive rewrite alone won't help; see the sketch after this list). e.g.

      multiplier(100, 1000000)

  throws

      Exception in thread "main" java.lang.StackOverflowError
- And in Method C, you'll get rounding errors with log10 (not to mention the obvious problems with attempting to take the log of any number <= 0), e.g. multiplier(2389, 123123) returns 294140846, but the actual answer is 294140847 (the last digits 9 x 3 mean the product must end in 7). An assumed reconstruction follows the list.
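For reference, here is a sign-aware iterative version (my own sketch, not the OP's code) which avoids both the negative-a bug of Method A and the recursion depth of Method B, while remaining O(|a|) and therefore hopeless next to a single IMUL:

    // Sketch: repeated addition with sign handling; still subject to int overflow,
    // and still breaks for a == Integer.MIN_VALUE (Math.abs overflows there)
    static int multiplyByRepeatedAddition(int a, int b) {
        int result = 0;
        int count = Math.abs(a);
        for (int i = 0; i < count; i++) {
            result += b;
        }
        return a < 0 ? -result : result;
    }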
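And an assumed reconstruction of a log10-based Method C, just to show where the rounding creeps in (the OP's exact code may differ):

    // Assumed shape of a log-based multiply: Math.log10 and Math.pow both round,
    // so the truncating (int) cast can land one below the true product,
    // e.g. the reported 294140846 instead of 294140847 for 2389 and 123123
    static int multiplyViaLogs(int a, int b) {
        return (int) Math.pow(10, Math.log10(a) + Math.log10(b));
    }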
Even the answer using two consecutive double precision division operators is prone to rounding issues when re-casting the double result back to an integer:
    static double multiply(double a, double b) {
        return 0 == (int) b
            ? 0.0
            : a / (1 / b);
    }
e.g. (int) multiply(1, 93) returns 92, because multiply returns 92.99999..., which is truncated by the cast back to a 32-bit integer.
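Rounding instead of truncating would mask that particular case, although it only papers over the floating-point error rather than fixing it (a sketch, reusing the multiply method above):

    int result = (int) Math.round(multiply(1, 93)); // 92.99999... rounds up to 93 rather than truncating to 92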
And of course, we don't need to mention that many of these algorithms are O(N) or worse, so the performance will be abysmal.