Question
A lot of times I see flag enum declarations that use hexadecimal values. For example:
[Flags]
public enum MyEnum
{
    None  = 0x0,
    Flag1 = 0x1,
    Flag2 = 0x2,
    Flag3 = 0x4,
    Flag4 = 0x8,
    Flag5 = 0x10
}
When I declare an enum, I usually declare it like this:
[Flags]
public enum MyEnum
{
    None  = 0,
    Flag1 = 1,
    Flag2 = 2,
    Flag3 = 4,
    Flag4 = 8,
    Flag5 = 16
}
Is there a reason or rationale for why some people choose to write the values in hexadecimal rather than decimal? The way I see it, it's easier to get confused when using hex values and accidentally write Flag5 = 0x16 instead of Flag5 = 0x10.
Answer 1:
Rationales may differ, but an advantage I see is that hexadecimal reminds you: "Okay, we're not dealing with numbers in the arbitrary human-invented world of base ten anymore. We're dealing with bits - the machine's world - and we're gonna play by its rules." Hexadecimal is rarely used unless you're dealing with relatively low-level topics where the memory layout of data matters. Using it hints at the fact that that's the situation we're in now.
Also, I'm not sure about C#, but I know that in C, x << y is a valid compile-time constant. Using bit shifts seems the clearest:
[Flags]
public enum MyEnum
{
    None  = 0,
    Flag1 = 1 << 0,
    Flag2 = 1 << 1,
    Flag3 = 1 << 2,
    Flag4 = 1 << 3,
    Flag5 = 1 << 4
}
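For what it's worth, a shift of two constant operands is also a constant expression in C#, so the declaration above compiles as written. Below is a minimal sketch (my own addition, not part of the original answer; the ShiftDemo class is just a throwaway name) that prints each member in decimal and hex to show the shifted values are the same constants as the hex/decimal versions:

// Sketch: confirm the shifted values match the familiar hex/decimal constants.
using System;

[Flags]
public enum MyEnum
{
    None  = 0,
    Flag1 = 1 << 0,  // 1  == 0x01
    Flag2 = 1 << 1,  // 2  == 0x02
    Flag3 = 1 << 2,  // 4  == 0x04
    Flag4 = 1 << 3,  // 8  == 0x08
    Flag5 = 1 << 4   // 16 == 0x10
}

public static class ShiftDemo
{
    public static void Main()
    {
        foreach (MyEnum flag in Enum.GetValues(typeof(MyEnum)))
        {
            // Prints each member in decimal and hex, e.g. "Flag5 = 16 (0x10)"
            Console.WriteLine($"{flag} = {(int)flag} (0x{(int)flag:X2})");
        }
    }
}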
Answer 2:
It makes it easy to see that these are binary flags.
None = 0x0, // == 00000
Flag1 = 0x1, // == 00001
Flag2 = 0x2, // == 00010
Flag3 = 0x4, // == 00100
Flag4 = 0x8, // == 01000
Flag5 = 0x10 // == 10000
Though the progression makes it even clearer:
Flag6 = 0x20 // == 00100000
Flag7 = 0x40 // == 01000000
Flag8 = 0x80 // == 10000000
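As a side note (my addition, not in the original answer): if you are on C# 7.0 or later, binary literals and digit separators let you write the bit pattern out literally, which makes the same point even more directly:

// Sketch only, assuming C# 7.0+: the same flags written with binary literals,
// so the "one bit per flag" pattern is visible without any mental translation.
using System;

[Flags]
public enum MyEnum
{
    None  = 0b0000_0000,
    Flag1 = 0b0000_0001,
    Flag2 = 0b0000_0010,
    Flag3 = 0b0000_0100,
    Flag4 = 0b0000_1000,
    Flag5 = 0b0001_0000
}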
Answer 3:
I think it's just because in hex the sequence is always 1, 2, 4, 8, and then you add a 0.
As you can see:
0x1 = 1
0x2 = 2
0x4 = 4
0x8 = 8
0x10 = 16
0x20 = 32
0x40 = 64
0x80 = 128
0x100 = 256
0x200 = 512
0x400 = 1024
0x800 = 2048
And so on; as long as you remember the sequence 1-2-4-8, you can build all the subsequent flags without having to remember the powers of 2.
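A quick way to see that repeating 1-2-4-8 pattern (a throwaway sketch of mine, not part of the answer):

// Prints each power of two in hex: the leading digit cycles 1, 2, 4, 8,
// and then the value gains another trailing zero.
using System;

public static class HexPattern
{
    public static void Main()
    {
        for (int i = 0; i < 12; i++)
        {
            ulong value = 1UL << i;
            Console.WriteLine($"1 << {i,2} = 0x{value:X} = {value}");
        }
    }
}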
Answer 4:
Because [Flags] means that the enum is really a bitfield. With [Flags] you can use the bitwise AND (&) and OR (|) operators to combine the flags. When dealing with binary values like this, it is almost always clearer to use hexadecimal values. That is the very reason we use hexadecimal in the first place: each hex character corresponds to exactly one nibble (four bits). With decimal, this 1-to-4 mapping does not hold.
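To make the combining concrete, here is a small sketch of my own (reusing the enum from the question; the FlagDemo class and variable names are just for illustration) showing flags OR-ed together and tested with a bitwise AND or with Enum.HasFlag:

// Sketch: combining flags with | and testing them with & (or HasFlag).
using System;

[Flags]
public enum MyEnum
{
    None  = 0x0,
    Flag1 = 0x1,
    Flag2 = 0x2,
    Flag3 = 0x4,
    Flag4 = 0x8,
    Flag5 = 0x10
}

public static class FlagDemo
{
    public static void Main()
    {
        MyEnum options = MyEnum.Flag1 | MyEnum.Flag3;   // 0x1 | 0x4 == 0x5

        // Test a single flag with a bitwise AND...
        bool hasFlag3 = (options & MyEnum.Flag3) != 0;  // true

        // ...or with the built-in helper.
        bool hasFlag2 = options.HasFlag(MyEnum.Flag2);  // false

        Console.WriteLine($"{options}: Flag3={hasFlag3}, Flag2={hasFlag2}");
        // Prints "Flag1, Flag3: Flag3=True, Flag2=False"
    }
}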
Answer 5:
Because there is a mechanical, simple way to double a power of two in hex. In decimal, this is hard; it requires long multiplication in your head. In hex it is a simple change. You can carry this on all the way up to 1UL << 63, which you can't easily do in decimal.
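For example (my own illustration, not from the answer), the top bit of a 64-bit value is tidy in hex and unwieldy in decimal:

// The highest bit of a ulong: easy to read in hex, not in decimal.
using System;

public static class TopBit
{
    public static void Main()
    {
        ulong top = 1UL << 63;
        Console.WriteLine($"0x{top:X16}");  // 0x8000000000000000
        Console.WriteLine(top);             // 9223372036854775808
    }
}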
Answer 6:
Because it is easier for humans to follow where the bits are in the flag. Each hexadecimal digit holds exactly four bits of binary:
0x0 = 0000
0x1 = 0001
0x2 = 0010
0x3 = 0011
... and so on
0xF = 1111
Typically you want your flags not to overlap bits, and the easiest way to do that and to visualize it is by using hexadecimal values to declare your flags.
So, if you need 16 bits' worth of flags, you use 4-digit hexadecimal values, and that way you can avoid erroneous values:
0x0001 //= 1     = 0000 0000 0000 0001
0x0002 //= 2     = 0000 0000 0000 0010
0x0004 //= 4     = 0000 0000 0000 0100
0x0008 //= 8     = 0000 0000 0000 1000
...
0x0010 //= 16    = 0000 0000 0001 0000
0x0020 //= 32    = 0000 0000 0010 0000
...
0x8000 //= 32768 = 1000 0000 0000 0000
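Written as a C# enum, that might look like the sketch below (my own example; the ushort backing type and the member names are purely illustrative):

// Sketch: a 16-bit flags enum where every member is a 4-digit hex value,
// so each flag's nibble and bit position can be read off directly.
using System;

[Flags]
public enum Permissions : ushort
{
    None    = 0x0000,
    Read    = 0x0001,
    Write   = 0x0002,
    Execute = 0x0004,
    Delete  = 0x0008,
    Share   = 0x0010,
    Archive = 0x0020,
    // ...
    Admin   = 0x8000
}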
Source: https://stackoverflow.com/questions/13222671/why-are-flag-enums-usually-defined-with-hexadecimal-values