I've seen a few questions and answers regarding the endianness of structs, but they were about detecting the endianness of a system, or converting data between the two endiannesses.
I am not sure whether this can be adapted to suit your purposes, but where I work we have found the following approach to be quite useful in many cases.
When endianness is important, we use two different data structures. One represents how the data is expected to arrive; the other is how we want it represented in memory. Conversion routines are then written to switch between the two.
The workflow operates roughly as follows ...
We find this decoupling useful for reasons including (but not limited to) ...
Hopefully this decoupling can be useful to your application too.
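For concreteness, a minimal sketch of that pattern might look like the following (the names wire_header, host_header, from_wire and to_wire are made up for illustration; it assumes the POSIX ntohl()/htonl() family and GCC-style packing):

#include <arpa/inet.h>  // htonl(), htons(), ntohl(), ntohs()
#include <stdint.h>

// How the data is expected to arrive (big-endian fields, no padding).
struct wire_header {
    uint32_t length;
    uint16_t version;
    uint16_t flags;
} __attribute__((packed));

// How we want it represented in memory (host byte order).
struct host_header {
    uint32_t length;
    uint16_t version;
    uint16_t flags;
};

// Conversion routines between the two representations.
inline host_header from_wire(const wire_header &w) {
    return host_header{ ntohl(w.length), ntohs(w.version), ntohs(w.flags) };
}

inline wire_header to_wire(const host_header &h) {
    return wire_header{ htonl(h.length), htons(h.version), htons(h.flags) };
}

The parsing and serialisation code is then the only place that ever sees wire_header; the rest of the program works purely with host_header.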
Boost provides endian buffers for this.
For example:
#include <boost/endian/buffers.hpp>
#include <boost/static_assert.hpp>
using namespace boost::endian;
struct header {
    big_int32_buf_t    file_code;    // stored big-endian in memory
    big_int32_buf_t    file_length;  // stored big-endian in memory
    little_int32_buf_t version;      // stored little-endian in memory
    little_int32_buf_t shape_type;   // stored little-endian in memory
};
BOOST_STATIC_ASSERT(sizeof(header) == 16U);
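Reading and writing then goes through the buffer types' value() accessor and assignment operator. A small usage sketch (the sample values and the file descriptor fd are made up; write() needs <unistd.h>):

header h;
h.file_code   = 9994;              // stored big-endian in memory
h.file_length = 100;               // stored big-endian in memory
h.version     = 1000;              // stored little-endian in memory
h.shape_type  = 5;                 // stored little-endian in memory
auto code = h.file_code.value();   // converted back to native order on read
write(fd, &h, sizeof(h));          // bytes leave exactly as laid out above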
Try using
#pragma scalar_storage_order big-endian
to store scalar fields in big-endian format,
#pragma scalar_storage_order little-endian
to store them in little-endian format, or
#pragma scalar_storage_order default
to store them in your machine's default endianness.
Read more in the GCC documentation on structure-layout pragmas (this is a GCC-specific extension).
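A minimal sketch of how the pragma is typically used (the struct is illustrative; as far as I know this extension arrived around GCC 6 and is only accepted in C, not C++):

#include <stdint.h>

/* Scalar members of structs defined after this pragma are stored big-endian. */
#pragma scalar_storage_order big-endian
struct be_header {
    uint32_t file_code;
    uint32_t file_length;
};
#pragma scalar_storage_order default  /* back to the machine default */

GCC inserts the necessary byte swaps whenever such members are read or written, so ordinary assignments and comparisons keep working.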
The way I usually handle this is like so:
#include <arpa/inet.h> // for ntohs() etc.
#include <stdint.h>
class be_uint16_t {
public:
    be_uint16_t() : be_val_(0) {
    }
    // Transparently cast from uint16_t
    be_uint16_t(const uint16_t &val) : be_val_(htons(val)) {
    }
    // Transparently cast to uint16_t
    operator uint16_t() const {
        return ntohs(be_val_);
    }
private:
    uint16_t be_val_;
} __attribute__((packed));
Similarly for be_uint32_t.
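Sketched out for completeness, the 32-bit counterpart just swaps in htonl()/ntohl():

class be_uint32_t {
public:
    be_uint32_t() : be_val_(0) {
    }
    // Transparently cast from uint32_t
    be_uint32_t(const uint32_t &val) : be_val_(htonl(val)) {
    }
    // Transparently cast to uint32_t
    operator uint32_t() const {
        return ntohl(be_val_);
    }
private:
    uint32_t be_val_;
} __attribute__((packed));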
Then you can define your struct like this:
struct be_fixed64_t {
    be_uint32_t int_part;
    be_uint32_t frac_part;
} __attribute__((packed));
The point is that the compiler will almost certainly lay out the fields in the order you write them, so all you are really worried about is big-endian integers. The be_uint16_t
object is a class that knows how to convert itself transparently between big-endian and machine-endian as required. Like this:
be_uint16_t x = 12;
x = x + 1; // Yes, this actually works
write(fd, &x, sizeof(x)); // writes 13 to file in big-endian form
In fact, if you compile that snippet with any reasonably good C++ compiler, you should find it emits a big-endian "13" as a constant.
With these objects, the in-memory representation is big-endian. So you can create arrays of them, put them in structures, etc. But when you go to operate on them, they magically cast to machine-endian. This is typically a single instruction on x86, so it is very efficient. There are a few contexts where you have to cast by hand:
be_uint16_t x = 37;
printf("x == %u\n", (unsigned)x); // Fails to compile without the cast
...but for most code, you can just use them as if they were built-in types.
There is a data representation for this called XDR (External Data Representation). Have a look at it: http://en.wikipedia.org/wiki/External_Data_Representation
It might be a little too much for your embedded system, though. Try searching for an already-implemented library that you can use (check the license restrictions!).
XDR is generally used in networked systems, since they need a way to move data in an endianness-independent way. Nothing says it cannot be used outside of networking, though.
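If you want a feel for what using XDR looks like, here is a minimal encoding sketch using the classic SunRPC xdrmem interface (on modern Linux this usually means installing libtirpc and linking against it; the buffer size and values are arbitrary):

#include <rpc/xdr.h>
#include <cstdio>

int main() {
    char buf[64];
    XDR xdrs;

    // Encode into a plain memory buffer.
    xdrmem_create(&xdrs, buf, sizeof(buf), XDR_ENCODE);

    int file_code = 9994;
    int version   = 1000;

    // xdr_int() writes each value in XDR's canonical big-endian form.
    if (!xdr_int(&xdrs, &file_code) || !xdr_int(&xdrs, &version)) {
        std::fprintf(stderr, "XDR encoding failed\n");
        return 1;
    }

    std::printf("encoded %u bytes\n", xdr_getpos(&xdrs));
    xdr_destroy(&xdrs);
    return 0;
}

Decoding uses the same xdr_int() calls with an XDR_DECODE stream, which is much of the appeal: one routine describes the data and works in both directions.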
A possible innovative solution would be to use a C interpreter like Ch and force the endian coding to big.