I need to store a 128-bit-long UUID in a variable. Is there a 128-bit datatype in C++? I do not need arithmetic operations, I just want to easily store and read the value very fast.
There is no 128-bit integer in Visual C++ because the Microsoft x64 calling convention only allows returning a single 64-bit value in RAX; there is no register pair reserved for a 128-bit result. This presents a constant headache because when you multiply two integers together, the result is a two-word integer. Most load-and-store machines support working with two CPU-word-sized integers, but working with four requires a software hack, so a 32-bit CPU cannot process 128-bit integers, and 8-bit and 16-bit CPUs can't do 64-bit integers without a rather costly software hack.

64-bit CPUs can, and regularly do, work with 128-bit integers, because multiplying two 64-bit integers yields a 128-bit result; that is why GCC has supported 128-bit integers since version 4.6. This still presents a problem for writing portable code, because you have to do an ugly hack where you return one 64-bit word in the return register and pass the other out through a reference. For example, in order to print a floating-point number fast with Grisu, we use 128-bit unsigned multiplication as follows:
#include <cstdint>

#if defined(_MSC_VER) && defined(_M_AMD64)
#define USING_VISUAL_CPP_X64 1
#include <intrin.h>
#pragma intrinsic(_umul128)
#elif (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6))
#if defined(__x86_64__)
#define COMPILER_SUPPORTS_128_BIT_INTEGERS 1
#endif
#endif

typedef uint64_t UI8;           // 64-bit unsigned word
#if COMPILER_SUPPORTS_128_BIT_INTEGERS
typedef unsigned __int128 UIH;  // 128-bit unsigned intermediate
#endif

struct TBinary {
  UI8 f;  // significand
  int e;  // binary exponent
  TBinary(UI8 f, int e) : f(f), e(e) {}

  // 64 x 64 -> 128-bit multiply; keeps the rounded high 64 bits.
  TBinary operator*(const TBinary& rhs) const {
    const UI8 rhs_f = rhs.f;
    const int rhs_e = rhs.e;
#if USING_VISUAL_CPP_X64
    // MSVC x64: _umul128 yields the full 128-bit product in two words.
    UI8 h;
    UI8 l = _umul128(f, rhs_f, &h);
    if (l & (UI8(1) << 63)) // rounding
      h++;
    return TBinary(h, e + rhs_e + 64);
#elif COMPILER_SUPPORTS_128_BIT_INTEGERS
    // GCC >= 4.6 on x86-64: the built-in 128-bit type does the work.
    UIH p = static_cast<UIH>(f) * static_cast<UIH>(rhs_f);
    UI8 h = static_cast<UI8>(p >> 64);
    UI8 l = static_cast<UI8>(p);
    if (l & (UI8(1) << 63)) // rounding
      h++;
    return TBinary(h, e + rhs_e + 64);
#else
    // Portable fallback: schoolbook multiplication on 32-bit half-words.
    const UI8 M32 = 0xFFFFFFFF;
    const UI8 a = f >> 32;
    const UI8 b = f & M32;
    const UI8 c = rhs_f >> 32;
    const UI8 d = rhs_f & M32;
    const UI8 ac = a * c;
    const UI8 bc = b * c;
    const UI8 ad = a * d;
    const UI8 bd = b * d;
    UI8 tmp = (bd >> 32) + (ad & M32) + (bc & M32);
    tmp += 1U << 31; // mult_round
    return TBinary(ac + (ad >> 32) + (bc >> 32) + (tmp >> 32), e + rhs_e + 64);
#endif
  }
};
I would recommend using std::bitset<128> (you can always do something like using UUID = std::bitset<128>;). It will probably have a similar memory layout to the custom struct proposed in the other answers, but you won't need to define your own comparison operators, hash, etc.
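A minimal sketch of that approach (MakeUuid is a hypothetical helper, and which half counts as "high" is a convention you have to pick yourself):

#include <bitset>
#include <cstdint>

using UUID = std::bitset<128>;

// Hypothetical helper: pack two 64-bit halves into the bitset.
UUID MakeUuid(std::uint64_t hi, std::uint64_t lo) {
  UUID id;
  for (int i = 0; i < 64; ++i) {
    id[i]      = ((lo >> i) & 1) != 0;
    id[i + 64] = ((hi >> i) & 1) != 0;
  }
  return id;
}

int main() {
  UUID a = MakeUuid(0x0123456789ABCDEFULL, 0xFEDCBA9876543210ULL);
  UUID b = a;
  return (a == b) ? 0 : 1;  // operator== comes with std::bitset for free
}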
Use the TBigInt template and set the bit width and signedness via the template parameters: TBigInt<128, true> for a signed 128-bit integer, or TBigInt<128, false> for an unsigned one. Hope that helps; this may be a late reply, and someone else may have found this method already. A short usage sketch follows below.
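For context, a hedged usage sketch assuming the TBigInt template named above (the "Math/BigInt.h" header is an assumption: in Unreal Engine the template lives there, but adjust the include to wherever your TBigInt is defined):

#include "Math/BigInt.h"  // assumed location of TBigInt; adjust as needed

TBigInt<128, true>  SignedId;    // signed 128-bit integer
TBigInt<128, false> UnsignedId;  // unsigned 128-bit integer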
GCC and Clang support __int128
Although GCC does provide __int128, it is supported only for targets (processors) that have an integer mode wide enough to hold 128 bits. On a given system, sizeof(intmax_t) and sizeof(uintmax_t) indicate the maximum integer width that the compiler and the platform support.
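As a sketch of how to use it portably: GCC and Clang define the macro __SIZEOF_INT128__ whenever __int128 is available, so you can gate on that and fall back to a plain two-word struct elsewhere (MakeUuid and Uuid128 are illustrative names, not a standard API):

#include <cstdint>

#if defined(__SIZEOF_INT128__)
// 128-bit type available (GCC/Clang on 64-bit targets).
unsigned __int128 MakeUuid(std::uint64_t hi, std::uint64_t lo) {
  return (static_cast<unsigned __int128>(hi) << 64) | lo;
}
#else
// Fallback: store the UUID as two 64-bit halves.
struct Uuid128 {
  std::uint64_t hi;
  std::uint64_t lo;
};
#endif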
Check out Boost's implementation:
#include <boost/multiprecision/cpp_int.hpp>
using namespace boost::multiprecision;
int128_t v = 1;
This is better than strings and arrays, especially if you need to do arithmetic operations with it.
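For instance, assuming Boost.Multiprecision's unsigned counterpart uint128_t (also provided by cpp_int.hpp), you can pack a UUID from two 64-bit halves and use ordinary integer operators on it:

#include <boost/multiprecision/cpp_int.hpp>
#include <cstdint>

using boost::multiprecision::uint128_t;

int main() {
  // Shifts, masks, and comparisons behave like a built-in integer.
  uint128_t uuid = (uint128_t(0x0123456789ABCDEFULL) << 64)
                 | 0xFEDCBA9876543210ULL;
  uint128_t copy = uuid;
  return (uuid == copy) ? 0 : 1;
}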