How do I convert between big-endian and little-endian values in C++?
EDIT: For clarity, I have to translate binary data (double-precision floating point values and 32-bit and 64-bit integers).
If you are doing this for purposes of network/host compatibility, you should use:
ntohl() //Network to Host byte order (Long)
htonl() //Host to Network byte order (Long)
ntohs() //Network to Host byte order (Short)
htons() //Host to Network byte order (Short)
If you are doing this for some other reason one of the byte_swap solutions presented here would work just fine.
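For example, a minimal sketch of converting a 32-bit value before sending and after receiving (assuming a POSIX system where these functions live in <arpa/inet.h>; on Windows they come from <winsock2.h>; the helper names are mine):

#include <arpa/inet.h>  // htonl, ntohl
#include <cstdint>

// Convert to network (big-endian) byte order before transmission...
uint32_t to_wire(uint32_t host_value) {
    return htonl(host_value);
}

// ...and back to host byte order after reception.
uint32_t from_wire(uint32_t wire_value) {
    return ntohl(wire_value);
}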
If a big-endian processor stores the 32-bit unsigned integer 2864434397 (0xAABBCCDD) as the byte sequence AA BB CC DD in memory, then a little-endian processor stores that same value as the byte sequence DD CC BB AA; the value is still 2864434397 on both.
Likewise, a big-endian processor stores the 16-bit unsigned short 43707 (0xAABB) as the bytes AA BB, while a little-endian processor stores the same value as BB AA.
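If you want to see this on your own machine, a quick sketch (my own example) is to dump the raw bytes of a known constant:

#include <cstddef>
#include <cstdint>
#include <cstdio>

int main() {
    uint32_t value = 0xAABBCCDDu;  // 2864434397 on any architecture
    const unsigned char* bytes = reinterpret_cast<const unsigned char*>(&value);
    // Prints "AA BB CC DD" on a big-endian machine and "DD CC BB AA" on a little-endian one.
    for (std::size_t i = 0; i < sizeof(value); ++i)
        std::printf("%02X ", bytes[i]);
    std::printf("\n");
}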
Here are a couple of handy macros to swap bytes from little-endian to big-endian and vice versa:
// can be used for short, unsigned short, and other 2-byte integer types
#define BYTESWAP16(n) ((((n) & 0xFF00u) >> 8) | (((n) & 0x00FFu) << 8))
// can be used for int or unsigned int (4-byte integer types); a float must be copied into a uint32_t first
#define BYTESWAP32(n) ((BYTESWAP16(((n) & 0xFFFF0000u) >> 16)) | (BYTESWAP16((n) & 0x0000FFFFu) << 16))
// can be used for long long or unsigned long long (8-byte integer types); a double must be copied into a uint64_t first
#define BYTESWAP64(n) ((BYTESWAP32(((n) & 0xFFFFFFFF00000000ull) >> 32)) | (BYTESWAP32((n) & 0x00000000FFFFFFFFull) << 32))
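A quick usage sketch (my addition), assuming the macros above are in scope:

#include <cstdint>

uint16_t a = BYTESWAP16(0xAABBu);               // a == 0xBBAA
uint32_t b = BYTESWAP32(0xAABBCCDDu);           // b == 0xDDCCBBAA
uint64_t c = BYTESWAP64(0x1122334455667788ull); // c == 0x8877665544332211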
Just thought I'd add my own solution here since I haven't seen it anywhere. It's a small, portable C++ template function that uses only bit operations.
// Swaps the byte order of an unsigned integer type using only shifts and masks.
template<typename T> inline static T swapByteOrder(const T& val) {
    int totalBytes = sizeof(val);
    T swapped = (T) 0;
    for (int i = 0; i < totalBytes; ++i) {
        // Take byte (totalBytes-i-1) of the input and place it at position i of the output.
        swapped |= (val >> (8*(totalBytes-i-1)) & 0xFF) << (8*i);
    }
    return swapped;
}
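A brief usage sketch (my own, assuming the template above is in scope):

#include <cstdint>

uint16_t a = swapByteOrder<uint16_t>(0xAABBu);               // 0xBBAA
uint32_t b = swapByteOrder<uint32_t>(0xAABBCCDDu);           // 0xDDCCBBAA
uint64_t c = swapByteOrder<uint64_t>(0x1122334455667788ull); // 0x8877665544332211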
Here's a generalized version I came up with off the top of my head, for swapping a value in place. The other suggestions would be better if performance is a problem.
#include <algorithm>  // std::swap (found in <utility> since C++11)

template<typename T>
void ByteSwap(T * p)
{
    // Swap the i-th byte with its mirror from the other end, in place.
    for (size_t i = 0; i < sizeof(T)/2; ++i)
        std::swap(((char *)p)[i], ((char *)p)[sizeof(T)-1-i]);
}
Disclaimer: I haven't tried to compile this or test it yet.
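For reference, a quick usage sketch (my addition; like the code above it, treat it as a sketch):

#include <cstdint>

int main() {
    double d = 3.14159;
    ByteSwap(&d);            // reverses the 8 bytes of d in place
    ByteSwap(&d);            // swapping twice restores the original value

    uint32_t n = 0xAABBCCDDu;
    ByteSwap(&n);            // n is now 0xDDCCBBAA
}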
From The Byte Order Fallacy by Rob Pike:
Let's say your data stream has a little-endian-encoded 32-bit integer. Here's how to extract it (assuming unsigned bytes):
i = (data[0]<<0) | (data[1]<<8) | (data[2]<<16) | (data[3]<<24);
If it's big-endian, here's how to extract it:
i = (data[3]<<0) | (data[2]<<8) | (data[1]<<16) | (data[0]<<24);
TL;DR: don't worry about your platform's native order; all that counts is the byte order of the stream you are reading from, and you'd better hope it's well defined.
Note: it was remarked in the comments that, absent explicit type conversion, it is important that data be an array of unsigned char or uint8_t. Using signed char or char (if signed) will result in data[x] being promoted to an integer, and data[x] << 24 potentially shifting a 1 into the sign bit, which is UB.
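To make those two expressions concrete, here they are wrapped in self-contained functions (the wrapping and the names load_le32/load_be32 are mine; the explicit uint32_t casts also sidestep the promotion issue from the note above):

#include <cstdint>

// Read a 32-bit little-endian value from a byte stream.
uint32_t load_le32(const uint8_t* data) {
    return (uint32_t(data[0]) << 0)  | (uint32_t(data[1]) << 8) |
           (uint32_t(data[2]) << 16) | (uint32_t(data[3]) << 24);
}

// Read a 32-bit big-endian value from the same stream.
uint32_t load_be32(const uint8_t* data) {
    return (uint32_t(data[3]) << 0)  | (uint32_t(data[2]) << 8) |
           (uint32_t(data[1]) << 16) | (uint32_t(data[0]) << 24);
}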
If you're using Visual C++, include intrin.h and call the following functions:
For 16 bit numbers:
unsigned short _byteswap_ushort(unsigned short value);
For 32 bit numbers:
unsigned long _byteswap_ulong(unsigned long value);
For 64 bit numbers:
unsigned __int64 _byteswap_uint64(unsigned __int64 value);
8-bit numbers (chars) don't need to be converted.
Also, although these are only defined for unsigned values, they work for signed integers as well.
For floats and doubles it's more difficult than with plain integers, as these may or may not be in the host machine's byte order. You can get little-endian floats on big-endian machines and vice versa.
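If the floating-point data is plain IEEE 754 and only the byte order differs, one common approach (a sketch using the MSVC intrinsic above; the helper name is mine) is to round-trip the bits through a same-sized integer:

#include <cstdint>
#include <cstring>   // std::memcpy
#include <intrin.h>  // _byteswap_uint64 (MSVC)

double byteswap_double(double d) {
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);  // reinterpret the double's bytes as an integer
    bits = _byteswap_uint64(bits);        // swap with the intrinsic
    std::memcpy(&d, &bits, sizeof d);     // copy the swapped bytes back
    return d;
}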
Other compilers have similar intrinsics as well.
In GCC, for example, you can directly call some builtins, as documented here:
uint32_t __builtin_bswap32 (uint32_t x)
uint64_t __builtin_bswap64 (uint64_t x)
(no header needs to be included). AFAIK, byteswap.h declares the same functions in a non-GCC-centric way as well.
A 16-bit swap is just a bit rotation (by 8 bits).
Calling the intrinsics instead of rolling your own gives you the best performance and code density, by the way.
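A minimal usage sketch (GCC/Clang assumed; my example):

#include <cstdint>
#include <cstdio>

int main() {
    uint32_t a = __builtin_bswap32(0xAABBCCDDu);            // 0xDDCCBBAA
    uint64_t b = __builtin_bswap64(0x1122334455667788ull);  // 0x8877665544332211

    // The 16-bit case really is just a rotate by 8 bits
    // (recent GCC/Clang versions also offer __builtin_bswap16):
    uint16_t c = static_cast<uint16_t>((0xAABBu >> 8) | (0xAABBu << 8));  // 0xBBAA

    std::printf("%08X %016llX %04X\n",
                static_cast<unsigned>(a),
                static_cast<unsigned long long>(b),
                static_cast<unsigned>(c));
}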