There are many situations (especially in low-level programming) where the binary layout of the data matters, for example hardware/driver manipulation and network protocols.
From the C++14 standard (N3797 draft), section 9.6 [class.bit], paragraph 1:
Allocation of bit-fields within a class object is implementation-defined. Alignment of bit-fields is implementation-defined. Bit-fields are packed into some addressable allocation unit. [ Note: Bit-fields straddle allocation units on some machines and not on others. Bit-fields are assigned right-to-left on some machines, left-to-right on others. — end note ]
Although notes are non-normative, every implementation I'm aware of uses one of two layouts: either big-endian or little-endian bit order.
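A quick way to see which of the two you get is to probe the object representation directly. This is just a sketch (the struct name Probe and the test are mine, not part of any API): it sets the first-declared one-bit field and inspects the underlying byte with memcpy:

#include <cstdint>
#include <cstdio>
#include <cstring>

struct Probe {
    std::uint8_t first : 1;   // first-declared bit-field
    std::uint8_t rest  : 7;
};

int main() {
    Probe p{};
    p.first = 1;                   // set only the first-declared field
    unsigned char raw = 0;
    std::memcpy(&raw, &p, 1);      // inspect the byte that holds the fields
    if (raw == 0x01)
        std::puts("first field is in the least significant bit (typical on little-endian targets)");
    else if (raw == 0x80)
        std::puts("first field is in the most significant bit (typical on big-endian targets)");
    return 0;
}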
Note that in practice the bit order tracks the target's byte order, which is why system headers switch on __BYTE_ORDER. For examples, look in netinet/tcp.h and other nearby headers.
Edit by OP: for example, tcp.h defines:
struct tcphdr
{
    u_int16_t th_sport;   /* source port */
    u_int16_t th_dport;   /* destination port */
    tcp_seq   th_seq;     /* sequence number */
    tcp_seq   th_ack;     /* acknowledgement number */
# if __BYTE_ORDER == __LITTLE_ENDIAN
    u_int8_t  th_x2:4;    /* (unused) */
    u_int8_t  th_off:4;   /* data offset */
# endif
# if __BYTE_ORDER == __BIG_ENDIAN
    u_int8_t  th_off:4;   /* data offset */
    u_int8_t  th_x2:4;    /* (unused) */
# endif
    // ...
};
And since code like this ships with mainstream compilers, the bit-field layout is de facto reliable in practice for a given endianness, even though the standard calls it implementation-defined.
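For instance, the data-offset field can be read like any other member, and the #if above makes the same source work on both byte orders. A minimal sketch (assuming a POSIX system where <netinet/tcp.h> provides the BSD-style member names, and a buffer buf that starts at the TCP header; the function name is mine):

#include <netinet/tcp.h>   // struct tcphdr
#include <cstddef>
#include <cstring>

// Returns the TCP header length in bytes; th_off counts 32-bit words.
std::size_t tcp_header_length(const unsigned char* buf) {
    tcphdr hdr;
    std::memcpy(&hdr, buf, sizeof hdr);   // copy to avoid alignment/aliasing issues
    return static_cast<std::size_t>(hdr.th_off) * 4;
}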
Edit:
This layout is portable across compilers targeting the same endianness, because the two fields exactly fill a single 16-bit allocation unit:
struct Foo {
uint16_t x: 10;
uint16_t y: 6;
};
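On mainstream compilers you can document that property with a static_assert (a sketch that continues the Foo above; it only checks the size, not the endianness-dependent bit order inside the unit):

static_assert(sizeof(Foo) == 2,
              "x and y together fill exactly one 16-bit allocation unit");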
But this one may not be portable, because y would have to straddle a 16-bit allocation unit, and implementations differ on whether a bit-field may straddle:
struct Foo {
uint16_t x: 10;
uint16_t y: 12;
uint16_t z: 10;
};
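A sketch that makes the divergence visible (the struct is the same as the Foo above, renamed here for clarity; the exact number depends on your compiler, since implementations that let y straddle the boundary pack the three fields into 4 bytes, while those that start y in a fresh unit use 6):

#include <cstdint>
#include <cstdio>

struct Straddle {
    std::uint16_t x : 10;
    std::uint16_t y : 12;
    std::uint16_t z : 10;
};

int main() {
    std::printf("sizeof(Straddle) = %zu\n", sizeof(Straddle));   // 4 or 6, implementation-defined
    return 0;
}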
And this one may not be portable either, because the remaining 6 bits of the allocation unit are implicit padding whose placement and content are unspecified:
struct Foo {
uint16_t x: 10;
};
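If such a struct is copied to the wire as raw bytes, those 6 padding bits carry indeterminate values. One common precaution, sketched here with names of my own choosing, is to zero the whole object before filling it:

#include <cstdint>
#include <cstring>

struct Foo {
    std::uint16_t x : 10;
};

// Copies Foo into a byte buffer with deterministic padding bits.
void serialize(const Foo& f, unsigned char out[sizeof(Foo)]) {
    Foo tmp;
    std::memset(&tmp, 0, sizeof tmp);   // make the 6 padding bits all zero
    tmp.x = f.x;
    std::memcpy(out, &tmp, sizeof tmp);
}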