Why use hexadecimal constants?

挽巷 2020-12-07 13:16

Sometimes I see integer constants defined in hexadecimal instead of in decimal. This is a small part I took from a GL10 class:

    public static final int GL_FOG_DENSITY = 0x0B62;
    public static final int GL_FOG_START   = 0x0B63;
    public static final int GL_FOG_END     = 0x0B64;
    public static final int GL_FOG_MODE    = 0x0B65;

11 Answers
  • 2020-12-07 14:02

    Readability when applying hexadecimal masks, for example.
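
    As a minimal sketch of that point (the pixel value and variable names here are hypothetical, not from the question): extracting one channel of a packed ARGB pixel reads naturally when the mask is written in hex, because the mask's digits say exactly which nibbles survive.

```java
public class HexMasks {
    public static void main(String[] args) {
        int argb = 0x80FF4020;           // a packed ARGB pixel (example value)
        int red  = (argb >> 16) & 0xFF;  // 0xFF obviously keeps exactly one byte
        int blue = argb & 0x000000FF;    // which byte is kept is visible in the mask
        System.out.println(red);         // 255
        System.out.println(blue);        // 32 (0x20)
    }
}
```

    The decimal equivalents of those masks (255 and 255 again, at different positions via the shift) carry none of that visual information.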

  • 2020-12-07 14:04

    Would you rather write 0xFFFFFFFF or 4294967295?

    The first one much more clearly represents a 32-bit data type with all ones. Of course, many a seasoned programmer would recognize the latter pattern and have a sneaking suspicion as to its true meaning. Even then, though, it is much more prone to typing errors.
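
    In Java this difference is directly observable: 0xFFFFFFFF is a legal int literal with all 32 bits set (numerically -1 in two's complement), while the decimal spelling does not even fit in an int literal and needs a long plus a cast. A small demonstration:

```java
public class AllOnes {
    public static void main(String[] args) {
        int allOnes = 0xFFFFFFFF;                      // obviously 32 one-bits
        System.out.println(allOnes == -1);             // true: two's complement
        System.out.println(Integer.bitCount(allOnes)); // 32
        // The decimal spelling requires a long literal and a narrowing cast:
        int same = (int) 4294967295L;
        System.out.println(allOnes == same);           // true
    }
}
```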

  • 2020-12-07 14:04

    0xB62 equals 2914 :-)

    For developers it is much easier to mentally picture the bit pattern of a constant when it is presented in hexadecimal than when it is presented as a base 10 integer.

    This makes hexadecimal better suited for constants used in APIs where bits and their positions (used as individual flags, for instance) are relevant.
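
    A sketch of such an API (the flag names here are hypothetical, not from GL10): each constant owns exactly one bit, and the hex form makes that ownership visible at a glance.

```java
public class Flags {
    // Each flag occupies a distinct bit; the hex literals make that obvious.
    public static final int FLAG_READ    = 0x01;  // bit 0
    public static final int FLAG_WRITE   = 0x02;  // bit 1
    public static final int FLAG_EXECUTE = 0x04;  // bit 2
    public static final int FLAG_DELETE  = 0x08;  // bit 3

    public static void main(String[] args) {
        int perms = FLAG_READ | FLAG_WRITE;              // combine with OR
        System.out.println((perms & FLAG_WRITE) != 0);   // true:  WRITE is set
        System.out.println((perms & FLAG_EXECUTE) != 0); // false: EXECUTE is not
    }
}
```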

  • 2020-12-07 14:06

    It is likely for organizational and visual cleanliness. Base 16 has a much simpler relationship to binary than base 10, because in base 16 each digit corresponds to exactly four bits.

    Notice how, in constants like the ones above, related values share many hex digits. If they were represented in decimal, the bits they have in common would be far less obvious; conversely, decimal numbers that share digits need not share bit patterns at all.

    Also, in many situations it is desirable to bitwise-OR constants together to create a combination of flags. If each constant is constrained to set only its own subset of the bits, the combined value can later be separated back into its parts. Using hex constants makes it clear which bits are non-zero in each value.

    There are two other reasonable possibilities: octal (base 8), which simply encodes 3 bits per digit, and binary coded decimal, in which each digit occupies four bits but digit values above 9 are prohibited - a disadvantage, since a BCD digit cannot represent all sixteen patterns that four bits can.
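
    The base comparison above can be seen directly with Java's radix conversions, applied here to GL_FOG_DENSITY (0x0B62, i.e. 2914) from the question:

```java
public class Bases {
    public static void main(String[] args) {
        int n = 0x0B62;                                // GL_FOG_DENSITY
        System.out.println(Integer.toBinaryString(n)); // 101101100010
        System.out.println(Integer.toOctalString(n));  // 5542 (3 bits per digit)
        System.out.println(Integer.toHexString(n));    // b62  (4 bits per digit)
        System.out.println(n);                         // 2914 (no bit alignment)
    }
}
```

    Note how each octal digit covers three bits and each hex digit covers four, while the decimal form 2914 has no per-digit relationship to the bits at all.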

  • 2020-12-07 14:07

    There is no performance difference between a decimal and a hexadecimal number: both are compiled to the same numeric constant in the byte code.

    Computers don't do decimal, they do (at best) binary. Hexadecimal maps to binary very cleanly, but it requires a bit of work to convert a decimal number to binary.
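
    That clean mapping is per-digit: each hex digit corresponds to exactly one 4-bit nibble, so the conversion is a table lookup rather than arithmetic. A small sketch walking the nibbles of 0x0504 (one of the GL constants quoted below):

```java
public class NibbleMap {
    public static void main(String[] args) {
        // 0x0504 -> nibbles 0, 5, 0, 4 -> 0000 0101 0000 0100
        int code = 0x0504;
        for (int shift = 12; shift >= 0; shift -= 4) {
            int nibble = (code >> shift) & 0xF;        // isolate one hex digit
            String bits = String.format("%4s",
                Integer.toBinaryString(nibble)).replace(' ', '0');
            System.out.print(Integer.toHexString(nibble) + "=" + bits + " ");
        }
        System.out.println();
        // prints: 0=0000 5=0101 0=0000 4=0100
    }
}
```

    Doing the same from the decimal form 1284 would require repeated division by 2 - there is no digit-by-digit shortcut.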

    One place where hexadecimal shines is when you have a number of related items, where many are similar, yet slightly different.

    // The error flags are grouped together:
    // they all start with 0x05..
    public static final int GL_STACK_UNDERFLOW = 0x0504;
    public static final int GL_OUT_OF_MEMORY = 0x0505;
    
    // The EXP flags are grouped together:
    // they all start with 0x08..
    public static final int GL_EXP = 0x0800;
    public static final int GL_EXP2 = 0x0801;
    
    // The FOG flags are grouped together:
    // they all start with 0x0B6.
    public static final int GL_FOG_DENSITY = 0x0B62;
    public static final int GL_FOG_START = 0x0B63;
    public static final int GL_FOG_END = 0x0B64;
    public static final int GL_FOG_MODE = 0x0B65;
    

    With decimal numbers, one would be hard pressed to "notice" constant regions of bits across a large number of different, but related items.
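
    One way to see the value of those shared prefixes: a mask can pick out a whole group at once. This is purely illustrative - a hypothetical grouping check, not something the GL API itself defines - using the constants quoted above:

```java
public class GlGroups {
    public static final int GL_FOG_DENSITY = 0x0B62;
    public static final int GL_FOG_START   = 0x0B63;
    public static final int GL_EXP         = 0x0800;

    public static void main(String[] args) {
        // Zeroing the last nibble exposes the shared 0x0B6. prefix.
        System.out.println((GL_FOG_DENSITY & 0xFFF0) == 0x0B60); // true
        System.out.println((GL_FOG_START   & 0xFFF0) == 0x0B60); // true
        System.out.println((GL_EXP         & 0xFFF0) == 0x0B60); // false
    }
}
```

    The same test phrased in decimal - "is the value between 2912 and 2927?" - gives no hint that it is really a prefix check on the upper bits.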
