Hey! I was looking at this code at http://www.gnu.org/software/m68hc11/examples/primes_8c-source.html
I noticed that in some situations they used hex numbers, like i
In both cases you cite, the bit pattern of the number is important, not the actual number.
For example, in the first case, j is going to be 1, then 2, 4, 8, 16, 32, 64 and finally 128 as the loop progresses. In binary, that is 0000:0001, 0000:0010, 0000:0100, 0000:1000, 0001:0000, 0010:0000, 0100:0000 and 1000:0000.
There's no option for binary constants in older C or C++ (C++14 and C23 later added 0b literals), but it's a bit clearer in hex: 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, and 0x80.
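This isn't the exact loop from the linked file, but a minimal sketch of that first pattern (the variable names are mine):

    #include <stdio.h>

    int main(void)
    {
        unsigned char flags = 0x5A;   /* example value: 0101:1010 */
        unsigned char j;

        /* j steps through 0x01, 0x02, 0x04, ... 0x80, one bit at a time */
        for (j = 0x01; j != 0; j <<= 1)
            printf("mask 0x%02X -> bit is %s\n",
                   j, (flags & j) ? "set" : "clear");

        return 0;
    }

Written in hex, each mask value tells you at a glance which single bit it tests.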
In the second example, the goal was to remove the lower two bytes of the value. So given a value of 1,234,567,890 we want to end up with 1,234,567,168. In hex, it's clearer: start with 0x4996:02D2, end with 0x4996:0000.
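I don't recall the exact expression the file uses, but the effect is the same as masking with something like 0xFFFF0000:

    #include <stdio.h>

    int main(void)
    {
        unsigned long value = 1234567890UL;         /* 0x499602D2 */
        unsigned long upper = value & 0xFFFF0000UL; /* clear the low 16 bits */

        printf("%lu -> %lu\n", value, upper);         /* 1234567890 -> 1234567168 */
        printf("0x%08lX -> 0x%08lX\n", value, upper); /* 0x499602D2 -> 0x49960000 */
        return 0;
    }

In decimal the before/after numbers look unrelated; in hex you can see exactly which half of the value survived.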
Generally, hex is used instead of decimal because the computer works with bits (binary numbers), and when you're working with bits it's more understandable to use hexadecimal, because it's easier to go from hex to binary than from decimal to binary.

0xFF = 1111 1111 (each F = 1111)

but to see that 255 = 1111 1111 you have to keep dividing:

255 / 2 = 127 (remainder 1)
127 / 2 = 63 (remainder 1)
63 / 2 = 31 (remainder 1)
... etc.

Can you see that? It's much simpler to go from hex to binary.
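Here's that repeated-division routine sketched in C, just to show how much work decimal-to-binary takes; with 0xFF you simply read each digit as four bits (F -> 1111):

    #include <stdio.h>

    int main(void)
    {
        unsigned int n = 255;
        char bits[33];               /* enough for a 32-bit value plus '\0' */
        int i = 0;

        /* decimal -> binary the long way: divide by 2, collect remainders */
        do {
            bits[i++] = (n % 2) ? '1' : '0';
            n /= 2;
        } while (n != 0);

        printf("255 = ");
        while (i > 0)
            putchar(bits[--i]);      /* remainders come out in reverse order */
        putchar('\n');               /* prints 11111111 */

        return 0;
    }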
Hex, or hexadecimal, digits each represent 4 bits of data, 0 to 15, or 0 to F in hex. Two hex digits represent a byte.
There's a direct mapping between hex (or octal, for that matter) digits and the underlying bit patterns, which is not the case with decimal. A decimal '9' represents something different with respect to bit patterns depending on what column it is in and what numbers surround it - it doesn't have a direct relationship to a bit pattern. In hex, a '9' always means '1001', no matter which column: 0x9 = '1001', 0x95 = '*1001* 0101', and so forth.
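To see that direct mapping in code, a quick illustration (the lookup table here is just for the demo):

    #include <stdio.h>

    int main(void)
    {
        /* Each hex digit maps to the same four bits no matter where it sits */
        static const char *nibble[16] = {
            "0000", "0001", "0010", "0011", "0100", "0101", "0110", "0111",
            "1000", "1001", "1010", "1011", "1100", "1101", "1110", "1111"
        };
        unsigned char value = 0x95;

        printf("0x%02X = %s %s\n", value,
               nibble[value >> 4], nibble[value & 0x0F]);   /* 1001 0101 */
        return 0;
    }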
As a vestige of my 8-bit days I find hex a convenient shorthand for anything binary. Bit twiddling is a dying skill. Once (about 10 years ago) I saw a third year networking paper at university where only 10% (5 out of 50 or so) of the people in the class could calculate a bit-mask.
Looking at the file, that's some pretty groady code. Hope you are good at C and not using it as a tutorial...
Hex is useful when you're working directly at the bit level or just above it. E.g., working on a driver where you're looking directly at the bits coming in from a device and twiddling the results so that someone else can read a coherent result. It's a compact, fairly easy-to-read representation of binary.
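For instance, something in this spirit (the register layout below is made up for illustration, not taken from any real device):

    #include <stdio.h>

    /* Hypothetical status byte from some device:
         bit  7    ready flag
         bits 6..4 error code
         bits 3..0 channel number                  */
    #define READY_MASK 0x80
    #define ERROR_MASK 0x70
    #define CHAN_MASK  0x0F

    int main(void)
    {
        unsigned char status = 0xA3;   /* pretend this just came off the wire */

        printf("ready=%d error=%d channel=%d\n",
               (status & READY_MASK) >> 7,
               (status & ERROR_MASK) >> 4,
               status & CHAN_MASK);
        return 0;
    }

The hex masks line up visually with the bit fields they select, which is exactly why driver code tends to be written this way.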
There are 8 bits in a byte. Hex, base 16, is terse. Any possible byte value is expressed using two characters from the collection 0..9, plus a,b,c,d,e,f.
Base 256 would be more terse. Every possible byte could have its own single character, but most human languages don't use 256 characters, so Hex is the winner.
To understand the importance of being terse, consider that back in the 1970s, when you wanted to examine your megabyte of memory, it was printed out in hex. The printout would use several thousand pages of big paper. Octal would have wasted even more trees.