Continuing my experiments in C, I wanted to see how bit fields are placed in memory. I'm working on an Intel 64-bit machine. Here is my piece of code:
#include <
For some reason I do not fathom, the implementers of the C standard decided that specifying a numeric type along with a bitfield should allocate space sufficient to hold that numeric type, unless the previous field was a bitfield, allocated out of the same type, which had enough space left over to handle the next field.
For your particular example, on a machine with 16-bit unsigned shorts, you should change the declarations in your bitfield to unsigned shorts. As it happens, unsigned char would also work, and yield the same results, but that is not always the case. If optimally-packed bitfields would straddle char boundaries but not short boundaries, then declaring the bitfields as unsigned char would require padding to avoid such straddling.
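To make that concrete, here is a small test of my own (not the code from the question); the sizes are implementation-defined, but typical results with GCC or Clang on x86-64 show the declared type setting the allocation unit:

    #include <stdio.h>

    struct wide {            /* fields allocated out of an unsigned int (typically 32 bits) */
        unsigned int a : 4;
        unsigned int b : 4;
    };

    struct narrow {          /* same fields allocated out of a 16-bit unsigned short */
        unsigned short a : 4;
        unsigned short b : 4;
    };

    int main(void) {
        printf("wide:   %zu\n", sizeof(struct wide));   /* typically 4 */
        printf("narrow: %zu\n", sizeof(struct narrow)); /* typically 2 */
        return 0;
    }

(Note that unsigned short as a bitfield type is itself implementation-defined, though the common compilers all accept it.)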
Although some processors would have no trouble generating code for bitfields which straddle storage-unit boundaries, the present C standard would forbid packing them that way (again, for reasons I do not fathom). On a machine with typical 8/16/32/64-bit data types, for example, a compiler could not allow a programmer to specify a 3-byte structure containing eight three-bit fields, since the fields would have to straddle byte boundaries. I could understand the spec not requiring compilers to handle fields that straddle boundaries, or requiring that bitfields be laid out in some particular fashion (I'd regard them as infinitely more useful if one could specify that a particular bitfield should e.g. use bits 4-7 of some location), but the present standard seems to give the worst of both worlds.
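As a sketch of that example (again my own, and using unsigned char bitfields, which are implementation-defined), a typical compiler reports 4 bytes rather than 3, because only two 3-bit fields fit in each byte-sized storage unit:

    #include <stdio.h>

    struct packed24 {       /* 24 bits of data, but it cannot occupy 3 bytes */
        unsigned char f0:3, f1:3, f2:3, f3:3, f4:3, f5:3, f6:3, f7:3;
    };

    int main(void) {
        printf("%zu\n", sizeof(struct packed24));  /* typically 4, never 3 */
        return 0;
    }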
In any case, the only way to use bitfields efficiently is to figure out where storage unit boundaries are, and choose types for the bitfields suitably.
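One way to do that (a sketch assuming a C11 compiler) is to pin the layout assumption down with a compile-time assertion, so a port to a compiler that packs differently fails at build time rather than silently:

    #include <assert.h>    /* static_assert, C11 */

    struct flags {
        unsigned short mode : 4;   /* bits 0-3, if the compiler packs as hoped */
        unsigned short rate : 12;  /* bits 4-15 */
    };

    static_assert(sizeof(struct flags) == 2,
                  "bitfields did not pack into a single 16-bit unit");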
PS--It's interesting to note that while I recall compilers used to disallow volatile declarations for structures containing bitfields (since the sequence of operations when writing a bitfield may not be well defined), under the new rules the semantics could be well defined (I don't know if the spec actually requires them). For example, given:
    #include <stdint.h>

    typedef struct {   /* sixteen 8-bit fields in two 64-bit storage units */
        uint64_t b0:8, b1:8, b2:8, b3:8, b4:8, b5:8, b6:8, b7:8;
        uint64_t b8:8, b9:8, bA:8, bB:8, bC:8, bD:8, bE:8, bF:8;
    } FOO;

    extern volatile FOO bar;
the statement bar.b3 = 123; will read the first 64 bits from bar, and then write the first 64 bits of bar with an updated value. If bar were not volatile, a compiler might replace that sequence with a simple 8-bit write, but bar could be something like a hardware register which can only be written in 32-bit or 64-bit chunks.
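Written out by hand, the sequence the compiler is obliged to generate looks roughly like the following sketch (the register address is made up, and the bit position of b3 assumes fields are allocated from the least-significant bit upward, which is itself implementation-defined):

    #include <stdint.h>

    /* Hypothetical 64-bit memory-mapped register. */
    #define REG (*(volatile uint64_t *)0x40000000u)

    /* Hand-written equivalent of bar.b3 = 123;: one full-width read,
       a field update, and one full-width write. */
    static void write_b3(uint8_t value) {
        uint64_t tmp = REG;                  /* volatile 64-bit read */
        tmp &= ~((uint64_t)0xFF << 24);      /* clear bits 24-31 (field b3) */
        tmp |= (uint64_t)value << 24;        /* insert the new field value */
        REG = tmp;                           /* volatile 64-bit write */
    }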
If I had my druthers, it would be possible to define bitfields using something like:
    typedef struct {
        uint32_t {
            baudRate:13=0, dataFormat:3,
            enableRxStartInt:1=28, enableRxDoneInt:1, enableTxReadyInt:1, enableTxEmptyInt:1;
        };
    } UART_CONTROL;
indicating that baudRate is 13 bits starting at bit 0 (the LSB), dataFormat is 3 bits starting after baudRate, enableRxStartInt is bit 28, etc. Such a syntax would allow many types of data packing and unpacking to be written in a portable fashion, and would allow many I/O register manipulations to be done in a compiler-agnostic fashion (though such code would obviously be hardware-specific).
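Lacking such a syntax, the usual portable workaround is explicit shift-and-mask macros; here is a rough sketch (all names hypothetical) encoding the same layout as the UART_CONTROL example above:

    #include <stdint.h>

    /* (start bit, width) accessors for fields of a 32-bit register;
       widths must be less than 32. */
    #define FIELD_MASK(w)             ((1u << (w)) - 1u)
    #define FIELD_GET(reg, lsb, w)    (((reg) >> (lsb)) & FIELD_MASK(w))
    #define FIELD_SET(reg, lsb, w, v) \
        (((reg) & ~(FIELD_MASK(w) << (lsb))) | \
         (((uint32_t)(v) & FIELD_MASK(w)) << (lsb)))

    /* baudRate: bits 0-12, dataFormat: bits 13-15, enableRxStartInt: bit 28 */
    #define UART_SET_BAUD(reg, v)   FIELD_SET((reg), 0, 13, (v))
    #define UART_SET_FMT(reg, v)    FIELD_SET((reg), 13, 3, (v))
    #define UART_RX_START(reg, v)   FIELD_SET((reg), 28, 1, (v))

This expresses the same layout, but every shift and mask must be spelled out by hand, which is exactly the noise the proposed syntax would eliminate.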