What is the complexity of converting a very large n-bit number to a decimal representation?
My thought is that the elementary algorithm of repeated integer division by the target base takes quadratic time; is there a faster method?
Naive base conversion as you described takes quadratic time: you perform Θ(n) divisions of a bigint by a small integer, and most of them take time linear in the size of the n-bit bigint, for Θ(n²) total.
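For concreteness, here is a minimal Python sketch of that naive method (the function name `to_decimal_naive` is mine, chosen just for illustration): each `divmod` by 10 scans the whole remaining bigint, and there are Θ(n) of them, hence the quadratic total.

```python
def to_decimal_naive(x: int) -> str:
    """Convert x to a decimal string by repeated division by 10.

    Each divmod by 10 is a bigint-by-smallint division costing time
    linear in the current length of x; with Theta(n) iterations the
    whole conversion is Theta(n^2).
    """
    if x == 0:
        return "0"
    sign, x = ("-", -x) if x < 0 else ("", x)
    digits = []
    while x:
        x, d = divmod(x, 10)   # one cheap division, repeated about n/log2(10) times
        digits.append(str(d))
    return sign + "".join(reversed(digits))
```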
You can do base conversion in O(M(n) log n) time, however, where M(n) is the cost of multiplying two n-bit numbers. Pick a power of the target base that is roughly the square root of the number to be converted, compute the quotient and remainder by it (which costs O(M(n)) using Newton's-method division), and recurse on the two halves. The recurrence T(n) = 2 T(n/2) + O(M(n)) resolves to O(M(n) log n).
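Here is a sketch of that divide-and-conquer scheme, again in Python; the name `to_decimal_dc` and the digit-count bound are my own choices. Note that CPython's built-in `divmod` is not O(M(n)), so this only illustrates the recursion structure; to reach the stated bound you would back it with a fast quotient/remainder routine (e.g. Newton's method over an FFT-based multiply, as GMP provides).

```python
def to_decimal_dc(x: int) -> str:
    """Divide-and-conquer base conversion (sketch).

    Split x by a power of 10 near sqrt(x) and recurse on the two halves.
    With a quotient/remainder routine running in O(M(n)), the recurrence
    T(n) = 2*T(n/2) + O(M(n)) gives O(M(n) * log n) overall.
    """
    if x == 0:
        return "0"
    if x < 0:
        return "-" + to_decimal_dc(-x)

    def rec(v: int, k: int) -> str:
        # Return exactly k decimal digits of v (requires v < 10**k),
        # zero-padded so the two halves concatenate correctly.
        if k <= 9:
            return format(v, "0%dd" % k)
        lo_len = k // 2
        hi, lo = divmod(v, 10 ** lo_len)   # split near the square root
        return rec(hi, k - lo_len) + rec(lo, lo_len)

    # Upper bound on the decimal digit count: bit_length * log10(2), rounded up.
    # A real implementation would also cache the powers 10**lo_len it reuses.
    k = x.bit_length() * 30103 // 100000 + 1
    return rec(x, k).lstrip("0")
```

As a sanity check, `to_decimal_dc(1 << 1000)` should agree with `str(1 << 1000)`.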