How do I expand the hexadecimal number 0x1234 to 0x11223344 in a high-performance way?
unsigned int c = 0x1234, b;
b = (c & 0x000f)       | (c & 0x000f) << 4  |
    (c & 0x00f0) << 4  | (c & 0x00f0) << 8  |
    (c & 0x0f00) << 8  | (c & 0x0f00) << 12 |
    (c & 0xf000) << 12 | (c & 0xf000) << 16;
While the other answers focus on hard-core optimization... take this as your best bet:
#include <iostream>
#include <string>

std::string toAARRGGBB(const std::string &argb)
{
    std::string ret("0x");
    int start = 2; // "0x####"
                   //   ^^ skipped
    for (std::string::size_type i = start; i < argb.length(); ++i)
    {
        ret += argb[i]; // double each hex digit
        ret += argb[i];
    }
    return ret;
}

int main()
{
    std::cout << toAARRGGBB("0xACED") << '\n'; //!!! prints 0xAACCEEDD
}
Haha
Assuming that you always want to convert 0xWXYZ to 0xWWXXYYZZ, I believe the solution below would be a little faster than the one you suggested:
unsigned int c = 0x1234;
unsigned int b = (c & 0xf) | ((c & 0xf0) << 4) |
                 ((c & 0xf00) << 8) | ((c & 0xf000) << 12);
b |= (b << 4);
Notice that one & (bitwise AND) operation is saved compared to your solution. :-)
Demo.
This works and may be easier to understand, but bit manipulations are so cheap that I wouldn't worry much about efficiency.
#include <stdio.h>

int main(void) {
    unsigned int c = 0x1234, b;
    b = (c & 0xf000) * 0x11000 + (c & 0x0f00) * 0x01100 +
        (c & 0x00f0) * 0x00110 + (c & 0x000f) * 0x00011;
    printf("%x -> %x\n", c, b); /* 1234 -> 11223344 */
    return 0;
}
If multiplication is cheap and 64-bit arithmetic is available, you could use this code:
uint64_t x = 0x1234;
x *= 0x0001000100010001ull;
x &= 0xF0000F0000F0000Full;
x *= 0x0000001001001001ull;
x &= 0xF0F0F0F000000000ull;
x = (x >> 36) * 0x11;
std::cout << std::hex << x << '\n';
In fact, it uses the same idea as the original attempt by AShelly.
I think the lookup-table approach suggested by Dimitri is a good choice, but I suggest going one step further and generating the table at compile time; doing the work at compile time will obviously reduce the execution time.
First, we create a compile-time value, using any of the suggested methods:
constexpr unsigned int transform1(unsigned int x)
{
return ((x << 8) | x);
}
constexpr unsigned int transform2(unsigned int x)
{
return (((x & 0x00f000f0) << 4) | (x & 0x000f000f));
}
constexpr unsigned int transform3(unsigned int x)
{
return ((x << 4) | x);
}
constexpr unsigned int transform(unsigned int x)
{
return transform3(transform2(transform1(x)));
}
// Dimitri version, using constexprs
template <unsigned int argb> struct aarrggbb_dimitri
{
static const unsigned int value = transform(argb);
};
// Adam Liss version
template <unsigned int argb> struct aarrggbb_adamLiss
{
static const unsigned int value =
(argb & 0xf000) * 0x11000 +
(argb & 0x0f00) * 0x01100 +
(argb & 0x00f0) * 0x00110 +
(argb & 0x000f) * 0x00011;
};
Then we create the compile-time lookup table with whatever method is available. I would like to use the C++14 integer sequence, but I don't know which compiler the OP will be using, so another possible approach is a pretty ugly macro:
#define EXPAND16(x) aarrggbb<x + 0>::value, \
aarrggbb<x + 1>::value, \
aarrggbb<x + 2>::value, \
aarrggbb<x + 3>::value, \
aarrggbb<x + 4>::value, \
aarrggbb<x + 5>::value, \
aarrggbb<x + 6>::value, \
... and so on
#define EXPAND EXPAND16(0), \
EXPAND16(0x10), \
EXPAND16(0x20), \
EXPAND16(0x30), \
EXPAND16(0x40), \
... and so on
See demo here.
PS: The Adam Liss approach could be used without C++11.
If you are going to do this for OpenGL, I suggest using a glTexImageXD function with the type parameter set to GL_UNSIGNED_SHORT_4_4_4_4. Your OpenGL driver should do the rest. And as for the transparency inversion, you can always manipulate blending via the glBlendFunc and glBlendEquation functions.
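For illustration, an upload along those lines might look like this (a sketch only: it assumes an existing GL context, a bound texture, and a pixels buffer of packed 4:4:4:4 texels, so it is not runnable on its own):

```cpp
#include <GL/gl.h>

// width, height and pixels are assumed to come from the caller; each texel
// is one 16-bit value holding four 4-bit channels.
void upload_4444_texture(GLsizei width, GLsizei height, const void *pixels)
{
    // The driver widens each 4-bit channel to 8 bits on upload, so no
    // manual 0x1234 -> 0x11223344 expansion is needed.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, pixels);

    // Transparency is then handled on the blending side.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBlendEquation(GL_FUNC_ADD);
}
```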