Why is ENOUGH enough? (storing an int in a char array)

Submitted by 对着背影说爱祢 on 2019-12-04 09:58:40

If int type is 32-bit, how many bytes do you need to represent any number (without sign and null terminator)?

If int is 32-bit, the maximum int value is 2147483647 (assuming two's complement); the most negative value, -2147483648, has 10 digits as well, so 10 bytes are needed to store the digits.

To know the number of bits in an int on a specific platform (e.g., 32 in our example) we can use CHAR_BIT * sizeof(int) == 32. Remember CHAR_BIT is the number of bits in a C byte and sizeof yields a size in bytes.

Then (32 - 1) / 3 == 10 (integer division), so 10 bytes are needed. You may also wonder where the value 3 comes from: the log base 2 of 10 is a little more than 3, i.e., each decimal digit carries slightly more than 3 bits of information, so dividing the bit count by 3 slightly over-estimates the number of digits, which is safe.

I assume that ENOUGH is computed conservatively: the final +2 accounts for the '\0' null terminator (always present, that's fine) and the '-' minus sign (only sometimes present). For positive values (and zero) you end up with one unused extra byte.

So, if ENOUGH is NOT computed as the strict minimum number of bytes required to store the value, why not use a fixed value of 12? (10 bytes for the digits, plus 2 bytes for '\0' and the sign.)

However:

CHAR_BIT * sizeof(int) is the exact number of bits to store an int in your machine.

-1 is because 1 bit is used for the sign (you "consume" 1 bit of information to store the sign, irrespective of the representation involved, be it two's complement, one's complement or sign-and-magnitude)

/3 is because every decimal digit carries at least 3 bits of information (log2(10) ≈ 3.32, in fact), so dividing the bit count by 3 over-estimates the digit count

CHAR_BIT is the number of bits in a char (most likely 8), and sizeof(int) is typically 2 or 4, so ENOUGH is 7 or 12, which is enough space to store an int including the sign and the null terminator.
