It's easy to construct a bitset<64> from a uint64_t:
uint64_t flags = ...;
std::bitset<64> bs{flags};
std::bitset has no range constructor, so you will have to loop, but setting every bit individually with std::bitset::set() is underkill. std::bitset has support for binary operators, so you can at least set 64 bits in bulk:
// here flags is an array, e.g. uint64_t flags[3], with flags[0] the least significant word
std::bitset<192> bs;
for(int i = 2; i >= 0; --i) {
    bs <<= 64;
    bs |= flags[i];
}
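Wrapped into a helper, a sketch of the same idea for an arbitrary number of 64-bit words could look like this (the name to_bitset and the least-significant-word-first layout are assumptions on my part):

#include <bitset>
#include <cstddef>
#include <cstdint>

// Sketch: build a bitset from N 64-bit words, with flags[0] holding
// the least significant word.
template <std::size_t N>
std::bitset<N * 64> to_bitset(const uint64_t (&flags)[N]) {
    std::bitset<N * 64> bs;
    for (std::size_t i = N; i-- > 0; ) {  // walk from the most significant word down
        bs <<= 64;
        bs |= flags[i];
    }
    return bs;
}

With a uint64_t flags[3], to_bitset(flags) then produces the same std::bitset<192> as the loop above.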
Update: In the comments, @icando raises the valid concern that bitshifts are O(N) operations for std::bitsets. For very large bitsets, this will ultimately eat the performance boost of bulk processing. In my benchmarks, the break-even point for a std::bitset<N * 64>, in comparison to a simple loop that sets the bits individually and does not mutate the input data:
int pos = 0;
for(auto f : flags) {
    for(int b = 0; b < 64; ++b) {
        bs.set(pos++, f >> b & 1);
    }
}
is somewhere around N == 200 (gcc 4.9 on x86-64 with libstdc++ and -O2). Clang performs somewhat worse, breaking even around N == 160. Gcc with -O3 pushes it up to N == 250.
Taking the lower end, this means that if you want to work with bitsets of 10000 bits or larger, this approach may not be for you. On 32-bit platforms (such as common ARMs), the threshold will probably lie lower, so keep that in mind when you work with 5000-bit bitsets on such platforms. I would argue, however, that somewhere far before this point, you should have asked yourself if a bitset is really the right choice of container.
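For reference, here is a minimal sketch of how the two approaches could be compared. This is not the benchmark behind the numbers above; the word count, the test data and the single timed run are my choices, and a real measurement would repeat the work many times:

#include <bitset>
#include <chrono>
#include <cstdint>
#include <iostream>

int main() {
    constexpr int N = 200;   // number of 64-bit words; assumed value
    uint64_t flags[N];
    for (int i = 0; i < N; ++i) {
        flags[i] = 0x9E3779B97F4A7C15ull * (i + 1);  // arbitrary non-trivial data
    }

    using clock = std::chrono::steady_clock;

    // bulk approach: shift the whole bitset and OR in one word at a time
    auto t0 = clock::now();
    std::bitset<N * 64> bulk;
    for (int i = N - 1; i >= 0; --i) {
        bulk <<= 64;
        bulk |= flags[i];
    }
    auto t1 = clock::now();

    // per-bit approach: set every bit individually
    std::bitset<N * 64> single;
    int pos = 0;
    for (auto f : flags) {
        for (int b = 0; b < 64; ++b) {
            single.set(pos++, f >> b & 1);
        }
    }
    auto t2 = clock::now();

    // use the results so the work is not optimized away
    std::cout << "equal: " << (bulk == single)
              << ", bulk: " << std::chrono::duration<double, std::micro>(t1 - t0).count()
              << " us, per-bit: " << std::chrono::duration<double, std::micro>(t2 - t1).count()
              << " us\n";
}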
If initializing from a range is important, you might consider using std::vector<bool> instead. It does have a constructor taking a pair of iterators.
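A minimal sketch of that, assuming the bits are already available as a range of bool-convertible values (the source array here is just for illustration):

#include <iterator>
#include <vector>

int main() {
    bool raw[] = {true, false, true, true};
    // std::vector's (first, last) iterator-pair constructor copies the range
    std::vector<bool> bits(std::begin(raw), std::end(raw));
}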