How do I convert between big-endian and little-endian values in C++?
EDIT: For clarity, I have to translate binary data (double-precision floating point values and 32-bit and 64-bit integers) from one CPU architecture to another.
From The Byte Order Fallacy by Rob Pike:
Let's say your data stream has a little-endian-encoded 32-bit integer. Here's how to extract it (assuming unsigned bytes):
i = (data[0]<<0) | (data[1]<<8) | (data[2]<<16) | (data[3]<<24);
If it's big-endian, here's how to extract it:
i = (data[3]<<0) | (data[2]<<8) | (data[1]<<16) | (data[0]<<24);
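For reference, here is a minimal, self-contained sketch that wraps both extractions in functions (the names load_le32 and load_be32 are mine, not from the article). The explicit casts to std::uint32_t keep the shifts in unsigned arithmetic, which sidesteps the promotion issue discussed in the note below:

#include <cstdint>

// Decode a 32-bit unsigned integer from a little-endian byte stream.
std::uint32_t load_le32(const unsigned char *data) {
    return (std::uint32_t(data[0]) << 0)  | (std::uint32_t(data[1]) << 8) |
           (std::uint32_t(data[2]) << 16) | (std::uint32_t(data[3]) << 24);
}

// Decode a 32-bit unsigned integer from a big-endian byte stream.
std::uint32_t load_be32(const unsigned char *data) {
    return (std::uint32_t(data[3]) << 0)  | (std::uint32_t(data[2]) << 8) |
           (std::uint32_t(data[1]) << 16) | (std::uint32_t(data[0]) << 24);
}

Both functions read exactly four bytes and produce the same result regardless of the host machine's endianness, which is the whole point of the article.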
TL;DR: don't worry about your platform's native byte order; all that counts is the byte order of the stream you are reading from, and you'd better hope it's well defined.
Note: it was remarked in the comments that, absent explicit type conversion, it is important that data be an array of unsigned char or uint8_t. Using signed char or char (if signed) will result in data[x] being promoted to an int, and data[x] << 24 potentially shifting a 1 into the sign bit, which is undefined behavior.
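To make the pitfall concrete, here is a hypothetical illustration (the function name is mine):

#include <cstdint>

// Risky: if char is signed and the stream byte is 0xFF, data[3]
// promotes to the int value -1, and left-shifting a negative value
// is undefined behavior.
std::uint32_t risky_le32(const char *data) {
    return (data[0] << 0) | (data[1] << 8) | (data[2] << 16) | (data[3] << 24);
}

Declaring the buffer as unsigned char (or uint8_t) and casting each byte to std::uint32_t before shifting, as in the load_le32 sketch above, avoids the problem.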