Any portable code that uses bitfields seems to distinguish between little- and big-endian platforms. See the declaration of struct iphdr in the Linux kernel for an example of such code.
ISO/IEC 9899: 6.7.2.1 / 10
An implementation may allocate any addressable storage unit large enough to hold a bit-field. If enough space remains, a bit-field that immediately follows another bit-field in a structure shall be packed into adjacent bits of the same unit. If insufficient space remains, whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is implementation-defined. The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined. The alignment of the addressable storage unit is unspecified.
It is safer to use bit shift operations instead of making any assumptions about bit-field ordering or alignment when trying to write portable code, regardless of system endianness or bitness.
Also see EXP11-C. Do not apply operators expecting one type to data of an incompatible type.
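As a minimal sketch of that approach, here is how the first two fields of an IPv4 header can be read with shifts and masks instead of bit-fields (the helper names are invented for illustration; per RFC 791 the version occupies the high nibble of the first header byte and the header length the low nibble):

#include <stdint.h>

/* Extract the version and IHL fields from the first byte of an IPv4
   header. The result depends only on the wire format, not on how the
   compiler would have laid out equivalent bit-fields. */
static inline uint8_t ip_version(const uint8_t *hdr) { return (uint8_t)(hdr[0] >> 4); }  /* high nibble */
static inline uint8_t ip_ihl(const uint8_t *hdr)     { return hdr[0] & 0x0Fu; }          /* low nibble */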
Just to point out - we've been discussing the issue of byte endianness, not bit endianness or endianness in bitfields, which crosses into the other issue:
If you are writing cross-platform code, never just write out a struct as a binary object. Besides the byte-endianness issues described above, there can be all kinds of packing and formatting differences between compilers. The language places few restrictions on how a compiler may lay out structs or bitfields in actual memory, so when saving to disk, you must write each data member of a struct one at a time, preferably in a byte-neutral way.
This packing impacts "bit endianness" in bitfields because different compilers might store the bitfields in a different direction, and the bit endianness impacts how they'd be extracted.
So bear in mind BOTH levels of the problem - the byte endianness impacts a computer's ability to read a single scalar value, e.g., a float, while the compiler (and build arguments) impact a program's ability to read in an aggregate structure.
What I have done in the past is to save and load a file in a neutral way and store meta-data about the way the data is laid out in memory. This allows me to use the "fast and easy" binary load path where compatible.
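For the "write each member one at a time" part, a byte-neutral writer for a single 32-bit member might look like this sketch (the helper name and the choice of little-endian file order are assumptions, not part of any particular library):

#include <stdint.h>
#include <stdio.h>

/* Write a 32-bit value in a fixed (little-endian) byte order, one byte
   at a time, so the file does not depend on the host's endianness or
   on how the compiler packed the struct it came from. */
static int write_u32_le(FILE *fp, uint32_t v)
{
    uint8_t b[4] = { (uint8_t)v, (uint8_t)(v >> 8), (uint8_t)(v >> 16), (uint8_t)(v >> 24) };
    return fwrite(b, 1, sizeof b, fp) == sizeof b ? 0 : -1;
}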
The bit fields will be stored in a different order depending on the endianness of the machine; this may not matter in some cases, but in others it does. Say, for example, that your ParsedInt struct represented flags in a packet sent over a network: a little-endian machine and a big-endian machine would read those flags in a different order from the transmitted byte, which is obviously a problem.
As far as I understand, bitfields are purely compiler constructs
And that's part of the problem. If the use of bit-fields was restricted to what the compiler 'owned', then how the compiler packed bits or ordered them would be of pretty much no concern to anyone.
However, bit-fields are probably used far more often to model constructs that are external to the compiler's domain - hardware registers, the 'wire' protocol for communications, or file format layout. These things have strict requirements for how bits have to be laid out, and using bit-fields to model them means that you have to rely on implementation-defined and, even worse, unspecified behavior for how the compiler will lay out the bit-field.
In short, bit-fields are not specified well enough to make them useful for the situations they seem to be most commonly used for.
To echo the most salient points: If you are using this on a single compiler/HW platform as a software-only construct, then endianness will not be an issue. If you are using code or data across multiple platforms OR need to match hardware bit layouts, then it IS an issue. And a lot of professional software is cross-platform, hence it has to care.
Here's the simplest example: I have code that stores numbers in binary format to disk. If I do not write and read this data to disk myself explicitly byte by byte, then it will not be the same value when read on an opposite-endian system.
Concrete example:
int16_t s = 4096; // a signed 16-bit number...
Let's say my program ships with some data on the disk that I want to read in. Say I want to load it as 4096 in this case...
fread(&s, sizeof(s), 1, fp); // reading it from disk as binary...
Here I read it as a 16-bit value, not as explicit bytes. That means if my system matches the endianness stored on disk, I get 4096, and if it doesn't, I get 16!
So the most common use of endianness is to bulk-load binary numbers and then do a byte swap if you don't match. In the past, we'd store data on disk as big endian because Intel was the odd man out and provided high-speed instructions to swap the bytes. Nowadays, Intel is so common that many programs make little endian the default and swap when running on a big-endian system.
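A sketch of that bulk-load-then-swap idea, assuming the file is stored little endian (the helper name is made up):

#include <stdint.h>
#include <stdio.h>

/* Read a little-endian 16-bit value; swap only if this host turns out
   to be big endian. Error handling omitted for brevity. */
static int16_t read_s16_le(FILE *fp)
{
    uint16_t u = 0;
    fread(&u, sizeof u, 1, fp);
    const uint16_t probe = 1;
    if (*(const uint8_t *)&probe == 0)          /* big-endian host */
        u = (uint16_t)((u >> 8) | (u << 8));    /* byte swap */
    return (int16_t)u;
}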
A slower, but endian neutral approach is to do ALL I/O by bytes, i.e.:
uint8_t ubyte;  // low byte, unsigned
int8_t sbyte;   // high byte, signed (carries the sign of s)
int16_t s; // read s in an endian-neutral way
// Let's choose little endian as our chosen byte order:
fread(&ubyte, 1, 1, fp); // Only read 1 byte at a time
fread(&sbyte, 1, 1, fp); // Only read 1 byte at a time
// Reconstruct s (cast the high byte to unsigned before shifting to avoid shifting a negative value)
s = (int16_t)(ubyte | ((uint8_t)sbyte << 8));
Note that this is identical to the code you'd write to do an endian swap, but you no longer need to check the endianness. And you can use macros to make this less painful.
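For instance, a small convenience macro along these lines (the name is invented) keeps the byte-by-byte reads from cluttering the code:

/* Hypothetical helper: read a little-endian 16-bit value from 'fp'
   into 'dst', one byte at a time. */
#define READ_LE16(dst, fp) do {                         \
        uint8_t lo_, hi_;                               \
        fread(&lo_, 1, 1, (fp));                        \
        fread(&hi_, 1, 1, (fp));                        \
        (dst) = (int16_t)(lo_ | ((uint16_t)hi_ << 8));  \
    } while (0)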
I used the example of stored data used by a program. The other main application mentioned is to write hardware registers, where those registers have an absolute ordering. One VERY COMMON place this comes up is with graphics. Get the endianness wrong and your red and blue color channels get reversed! Again, the issue is one of portability - you could simply adapt to a given hardware platform and graphics card, but if you want your same code to work on different machines, you must test.
Here's a classic test:
typedef union { uint16_t s; uint8_t b[2]; } EndianTest_t;
EndianTest_t test = { .s = 4096 };  // 4096 == 0x1000
if (test.b[0] == 0x10) printf("Big Endian Detected!\n");
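To tie that back to the graphics example: the same byte-order effect is easy to see by looking at a packed pixel value through a byte pointer (the 0xAARRGGBB packing below is just an assumed format):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t pixel = 0xFFFF0000u;   /* assumed ARGB packing: opaque red */
    const uint8_t *p = (const uint8_t *)&pixel;
    /* Little-endian hosts print 00 00 FF FF (B,G,R,A); big-endian hosts
       print FF FF 00 00 (A,R,G,B). Hardware expecting one fixed byte
       order will show red as blue if you guess wrong. */
    printf("%02X %02X %02X %02X\n", p[0], p[1], p[2], p[3]);
    return 0;
}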
Note that bitfield issues exist as well but are orthogonal to endianness issues.
By the C standard, the compiler is free to store the bit field pretty much any way it wants. You can never make any assumptions about where the bits are allocated. Here are just a few bit-field related things that are not specified by the C standard:
Unspecified behavior
- The alignment of the addressable storage unit allocated to hold a bit-field.
Implementation-defined behavior
- Whether a bit-field that does not fit is put into the next storage unit or overlaps adjacent units.
- The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order).
Big/little endian is of course also implementation-defined. This means that your struct could be allocated in the following ways (assuming 16 bit ints):
PADDING : 8
f1 : 1
f2 : 3
f3 : 4
or
PADDING : 8
f3 : 4
f2 : 3
f1 : 1
or
f1 : 1
f2 : 3
f3 : 4
PADDING : 8
or
f3 : 4
f2 : 3
f1 : 1
PADDING : 8
Which one applies? Take a guess, or read the in-depth backend documentation of your compiler. Add the complexity of 32-bit integers, in big or little endian, to this. Then add the fact that the compiler is allowed to insert any number of padding bytes inside your bit-field struct, because it is treated like any other struct (it can't add padding at the very beginning of the struct, but everywhere else).
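If you want to know what your particular compiler actually did, a small probe like this sketch will show the raw bytes (the struct definition is reconstructed from the field widths used in the layouts above):

#include <stdio.h>
#include <string.h>

struct ParsedInt {          /* same widths as in the layouts above */
    unsigned int f1 : 1;
    unsigned int f2 : 3;
    unsigned int f3 : 4;
};

int main(void)
{
    struct ParsedInt p = { 1, 5, 9 };       /* f1=1, f2=5, f3=9 */
    unsigned char raw[sizeof p];
    memcpy(raw, &p, sizeof p);
    for (size_t i = 0; i < sizeof p; i++)   /* dump the raw bytes */
        printf("%02X ", raw[i]);
    printf("\n");
    return 0;
}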
And then I haven't even mentioned what happens if you use plain "int" as the bit-field type (whether it is treated as signed or unsigned is implementation-defined), or if you use any type other than _Bool, signed int, or unsigned int (whether such types are allowed at all is implementation-defined).
So to answer the question: there is no such thing as portable bit-field code, because the C standard is extremely vague about how bit fields should be implemented. The only thing bit-fields can be trusted with is to be chunks of boolean values, where the programmer isn't concerned about the location of the bits in memory.
The only portable solution is to use bit-wise operators instead of bit fields. The generated machine code will be just as efficient, but the behavior is deterministic. Bit-wise operators are 100% portable on any C compiler for any system.
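A sketch of what that looks like for the f1/f2/f3 example above, with an assumed layout that puts f1 in the least significant bit:

#include <stdint.h>

/* Pack f1:1, f2:3, f3:4 into one byte with shifts and masks. The bit
   layout is now defined by this code, not by whatever the compiler
   would have chosen for a bit-field. */
#define F1_SHIFT 0
#define F2_SHIFT 1
#define F3_SHIFT 4

static inline uint8_t pack_fields(unsigned f1, unsigned f2, unsigned f3)
{
    return (uint8_t)(((f1 & 0x1u) << F1_SHIFT) |
                     ((f2 & 0x7u) << F2_SHIFT) |
                     ((f3 & 0xFu) << F3_SHIFT));
}

static inline unsigned get_f2(uint8_t packed)
{
    return (packed >> F2_SHIFT) & 0x7u;
}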