How do I convert between big-endian and little-endian values in C++?
EDIT: For clarity, I have to translate binary data (double-precision floating point values and 32-bit and 64-bit integers) from one CPU architecture to another.
Seems like the safe way would be to use htons on each 16-bit word. So, if you have...
#include <algorithm>   // std::transform
#include <arpa/inet.h> // htons (POSIX)
#include <cstdint>     // uint16_t
#include <vector>

std::vector<uint16_t> word_storage(n); // where n is the number of 16-bit words to convert
// the following would do the trick
std::transform(word_storage.cbegin(), word_storage.cend(), word_storage.begin(),
               [](const uint16_t input) -> uint16_t { return htons(input); });
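The question also mentions 32-bit values; the same std::transform pattern handles those with htonl, the 32-bit counterpart of htons. A minimal sketch under that assumption (the function name to_big_endian is just a placeholder):

#include <algorithm>   // std::transform
#include <arpa/inet.h> // htonl (POSIX)
#include <cstdint>     // uint32_t
#include <vector>

// Byte-swap every 32-bit word in place; like htons, htonl is a no-op on a big-endian host.
void to_big_endian(std::vector<uint32_t>& dword_storage) {
    std::transform(dword_storage.cbegin(), dword_storage.cend(), dword_storage.begin(),
                   [](const uint32_t input) -> uint32_t { return htonl(input); });
}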
The above would be a no-op if you were on a big-endian system, so I would look for whatever compile-time condition your platform provides and skip the transform when htons is a no-op; the pass is O(n), after all. On a Mac, it would be something like ...
#if (__DARWIN_BYTE_ORDER != __DARWIN_BIG_ENDIAN)
std::transform(word_storage.cbegin(), word_storage.cend(), word_storage.begin(),
               [](const uint16_t input) -> uint16_t { return htons(input); });
#endif
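If you can assume a C++20 compiler, std::endian from <bit> gives a portable compile-time condition, so the guard does not have to rely on platform-specific macros. A sketch along those lines (the function name host_to_big_endian is just a placeholder):

#include <algorithm>   // std::transform
#include <arpa/inet.h> // htons (POSIX)
#include <bit>         // std::endian (C++20)
#include <cstdint>     // uint16_t
#include <vector>

void host_to_big_endian(std::vector<uint16_t>& word_storage) {
    // On a big-endian host the condition is false at compile time and the O(n) pass is skipped.
    if constexpr (std::endian::native != std::endian::big) {
        std::transform(word_storage.cbegin(), word_storage.cend(), word_storage.begin(),
                       [](const uint16_t input) -> uint16_t { return htons(input); });
    }
}

Either way, the guard only saves the wasted pass; htons itself is already the identity on a big-endian host.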