If I have a C++ enum:

enum Foo
{
    Bar,
    Baz,
    Bork,
};

How do I tell the compiler to use a uint16_t to actually store the enum's values?
Before C++11 you can force a minimum storage size by using a suitable range of values:

enum foo {
    v0   = 0,
    vmax = 32767
};

I think the compiler is free to choose either a signed or an unsigned integer type as the underlying type. The above range forces the representation to use at least short as its underlying integer; making the maximum even one bigger may cause it to use a larger type instead. Of course, this only forces a minimum range, and the compiler is free to choose a bigger one. Also, with the above definition you are not allowed to stray outside the range [0, 32767]: if you really need a range of (at least) 16 bits, you need to use corresponding values.
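For example, here is a minimal sketch of that trick (the name foo16 and its enumerators are made up for illustration) that forces at least a full 16-bit unsigned range in pre-C++11 code:

#include <iostream>

// Pre-C++11: force at least a 16-bit range by including an enumerator
// whose value does not fit in anything smaller. The compiler may still
// pick a wider type; only the minimum is guaranteed.
enum foo16 {
    f0   = 0,
    fmax = 0xFFFF   // 65535 needs at least 16 bits
};

int main()
{
    // Implementation-defined, but the type must hold every enumerator,
    // so it is at least 2 bytes on typical platforms.
    std::cout << "sizeof(foo16) = " << sizeof(foo16) << '\n';
    return 0;
}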
You cannot do this in C++98/03. C++11 does allow you to do it, and without enum class, the way everyone else seems to keep telling you:

enum EnumType : uint16_t
{
    Bar,
    Baz,
    Bork,
};
Again, you don't have to use enum class
. Not that it's a bad idea, but you don't have to.
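As a quick sanity check, here is a small sketch (the static_assert and the implicit conversion below are my own illustration, not part of the answer above) showing that a plain enum with a fixed underlying type is stored as a uint16_t and still converts to integers implicitly, unlike an enum class:

#include <cstdint>

enum EnumType : uint16_t
{
    Bar,
    Baz,
    Bork,
};

// The fixed underlying type determines the size of the enum.
static_assert(sizeof(EnumType) == sizeof(uint16_t),
              "EnumType is stored as a uint16_t");

int main()
{
    uint16_t raw = Baz;  // implicit conversion works for a plain enum;
                         // an enum class would require a static_cast
    return raw == 1 ? 0 : 1;
}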
Does GCC support this feature in its implementation of C++11?
Which version of GCC? It looks like GCC 4.4 added this functionality, but you should probably look to more recent versions, just for the sake of stability.
In C++11, you can do this:

enum class Foo : uint16_t
{
    Bar,
    Baz,
    Bork,
};
Later on, you can also query the underlying type of the enum:

#include <type_traits>
std::underlying_type<Foo>::type v = 10; // v is a uint16_t
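A hedged sketch of how that trait can be used in practice, for example to verify the underlying type at compile time and to convert an enumerator back to its raw integer (the helper name to_raw is made up for illustration):

#include <cstdint>
#include <type_traits>

enum class Foo : uint16_t { Bar, Baz, Bork };

// Compile-time check that the underlying type really is uint16_t.
static_assert(std::is_same<std::underlying_type<Foo>::type, uint16_t>::value,
              "Foo is stored as a uint16_t");

// Illustrative helper: convert an enumerator to its underlying value.
constexpr std::underlying_type<Foo>::type to_raw(Foo f)
{
    return static_cast<std::underlying_type<Foo>::type>(f);
}

int main()
{
    return to_raw(Foo::Baz) == 1 ? 0 : 1;
}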
With C++11 you now have enum class, which allows you to set the underlying type explicitly:

enum class Foo : uint16_t
{
    Bar,
    Baz,
    Bork,
};
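For completeness, a small usage sketch (the round-trip through a uint16_t variable is just an assumed example): because enum class values do not convert implicitly, storing one in a uint16_t requires an explicit cast even though the storage size matches:

#include <cstdint>

enum class Foo : uint16_t
{
    Bar,
    Baz,
    Bork,
};

int main()
{
    // Scoped enumerators do not convert implicitly, so the casts are required.
    uint16_t stored = static_cast<uint16_t>(Foo::Bork);
    Foo restored = static_cast<Foo>(stored);  // round-trip back to the enum
    return restored == Foo::Bork ? 0 : 1;
}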