This is a homework question that I am stuck with.
Consider unsigned integer representation. How many bits will be required to store a decimal number containing: i) 3 digits?
The formula for the number of binary bits required to store n integers (for example, 0 to n - 1) is:

loge(n) / loge(2)

and round up.
For example, for values -128 to 127 (signed byte) or 0 to 255 (unsigned byte), the number of integers is 256, so n is 256, giving 8 from the above formula.
For 0 to n, use n + 1 in the above formula (there are n + 1 integers).
On your calculator, loge may just be labelled log or ln (natural logarithm).
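As a quick sketch, the formula drops straight into Python (the function name is mine, not part of the answer):

from math import ceil, log

def bits_for_n_integers(n):
    # log2(n) = loge(n) / loge(2), rounded up.
    # Caution: floating point can misround by one ulp at exact
    # powers of two; an integer-based approach avoids that.
    return ceil(log(n) / log(2))

print(bits_for_n_integers(256))   # 8  (the byte example above)
print(bits_for_n_integers(1001))  # 10 (0 to 1000 is 1001 integers)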
OK, to generalize the technique for how many bits you need to represent a number: if a representation has R symbols and you want to know how many bits that corresponds to, solve R = 2^n, i.e. n = log2(R), where n is the number of bits and R is the number of symbols in the representation.
For the decimal number system R = 10 (digits 0 through 9), so we solve 10 = 2^n, which gives about 3.32 bits per decimal digit. Thus a 3-digit number needs 9.97 bits, rounded up to 10, and a 1000-digit number needs 3322 bits.
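The same arithmetic as a small Python sketch (names are mine):

from math import ceil, log2

def bits_for_digits(num_digits, base=10):
    # each base-R digit carries log2(R) bits of information;
    # a d-digit number therefore needs ceil(d * log2(R)) bits
    return ceil(num_digits * log2(base))

print(log2(10))               # 3.3219... bits per decimal digit
print(bits_for_digits(3))     # 10
print(bits_for_digits(1000))  # 3322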
There are a lot of answers here, but I'll add my approach since I found this post while working on the same problem.
Starting with what we know, here are the numbers from 0 to 16:
Number   Encoded in bits   Minimum bits to encode
0        000000            1
1        000001            1
2        000010            2
3        000011            2
4        000100            3
5        000101            3
6        000110            3
7        000111            3
8        001000            4
9        001001            4
10       001010            4
11       001011            4
12       001100            4
13       001101            4
14       001110            4
15       001111            4
16       010000            5
Looking at where the breaks fall, we get this table:

number <=   number of bits
1           1
3           2
7           3
15          4
So, now how do we compute the pattern?
Remember that log2(n) = log10(n) / log10(2).

number   log10(n)   log2(n)   ceil[log2(n)]
1        0          0         0 (special case)
3        0.477      1.58      2
7        0.845      2.807     3
8        0.903      3         3 (special case)
15       1.176      3.91      4
16       1.204      4         4 (special case)
31       1.491      4.95      5
63       1.799      5.98      6
Now the results match the first table, except for the special cases: 0 and any power of 2 (including 1 = 2^0). For those, log2(n) is already an integer, so applying the ceiling doesn't change it, and the result comes out one bit short of the true minimum bit-field length.
To account for the special cases, add one to the input before taking the log: ceil(log2(n + 1)) gives the right count for every n >= 1, and n = 0 gets its own check. The resulting code, implemented in Python:
from math import log
from math import ceil

def min_num_bits_to_encode_number(a_number):
    a_number = a_number + 1  # adjust by 1 for the special cases above
    # log of zero is undefined, and 0 itself still takes 1 bit to encode
    if a_number <= 1:
        return 1
    num_bits = int(ceil(log(a_number, 2)))  # log(x, 2) is log base 2
    return num_bits
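A quick check against the first table (the test loop is mine):

for n in (0, 1, 2, 3, 7, 8, 15, 16):
    print(n, min_num_bits_to_encode_number(n))
# prints 1, 1, 2, 2, 3, 4, 4, 5 -- matching the table, special cases included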
Let the required number of bits be n; then 2^n = base^digits. Take the log of both sides and solve: n = digits * log2(base), rounded up to the next integer.
Well, you just have to calculate the range for each case and find the lowest power of 2 that is at least as large as that range.

For instance, in i), 3 decimal digits -> 10^3 = 1000 possible numbers, so you have to find the lowest power of 2 that is at least 1000, which in this case is 2^10 = 1024 (10 bits).
Edit: Basically, you need to find the number of possible values for the number of digits you have, and then find how many digits in the other base (here base 2, binary) give at least that many possible values.

To calculate the number of possibilities given the number of digits: possibilities = base^ndigits.

So, if you have 3 digits in decimal (base 10), you have 10^3 = 1000 possibilities. Then you have to find a number of digits in binary (bits, base 2) such that the number of possibilities is at least 1000, which in this case is 2^10 = 1024 (9 bits isn't enough, because 2^9 = 512, which is less than 1000).
If you generalize this, you have: 2^nbits = possibilities <=> nbits = log2(possibilities).

Applied to i), this gives log2(1000) = 9.97, and since the number of bits has to be an integer, you round up to 10.
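If you'd rather avoid floating point entirely, the same search for the lowest sufficient power of 2 can be done by direct iteration (a sketch; the function name is mine):

def bits_for_possibilities(possibilities):
    # find the lowest power of 2 that is at least `possibilities`
    nbits = 0
    while 2 ** nbits < possibilities:
        nbits += 1
    return nbits

print(bits_for_possibilities(10 ** 3))  # 10, since 2**10 = 1024 >= 1000

Integer arithmetic sidesteps the rounding worries that log2 can have when the range lands exactly on a power of 2.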
The short answer is:
int nBits = ceil(log2(N));
That's simply because pow(2, nBits), the number of values nBits can represent, is at least N.
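In Python, the same result is available with pure integer arithmetic via int.bit_length, which avoids floating-point edge cases (assuming N here is the count of representable values, 0 to N - 1):

def n_bits(N):
    # (N - 1).bit_length() equals ceil(log2(N)) for every N >= 1
    return (N - 1).bit_length()

print(n_bits(256))   # 8
print(n_bits(1000))  # 10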