I know you can use this table to convert decimal to BCD:
0 0000
1 0001
2 0010
3 0011
4 0100
5 0101
6 0110
7 0111
8 1000
9 1001
Would something like this work for your conversion?
#include <bitset>
#include <limits>
#include <string>
using namespace std;

string dec_to_bin(unsigned long n)
{
    // pads to the full bit width of unsigned long
    return bitset<numeric_limits<unsigned long>::digits>(n).to_string();
}
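A minimal usage sketch (my addition; the output width depends on the platform's unsigned long, 32 or 64 bits):

#include <iostream>

int main()
{
    // prints the full-width binary expansion, e.g. "00...00101" for 5
    std::cout << dec_to_bin(5) << '\n';
    return 0;
}

Note that this gives the plain binary expansion of the whole number, not one nibble per decimal digit.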
Usually when someone says they want to convert from decimal to BCD, they're talking about more than one decimal digit.
BCD is often packed into two decimal digits per byte (because 0..9 fit in 4 bits, as you've shown), but I think it's more natural to use an array of bytes, one per decimal digit.
An n-bit unsigned binary number will fit into ceil(n*log10(2)) = ceil(n/log2(10)) decimal digits. It will also fit in ceil(n/3) = floor((n+2)/3) decimal digits, since 2^3=8 is less than 10. For example, a 32-bit number needs at most ceil(32*log10(2)) = 10 decimal digits, and the cruder bound gives ceil(32/3) = 11.
With that in mind, here's how I'd get the decimal digits of an unsigned int:
#include <algorithm>
#include <vector>
template <class Uint>
std::vector<unsigned char> bcd(Uint x) {
    std::vector<unsigned char> ret;
    if (x == 0) ret.push_back(0);
    // skip the above line if you don't mind an empty vector for "0"
    while (x > 0) {
        Uint d = x / 10;
        ret.push_back(x - (d * 10)); // may be faster than x%10
        x = d;
    }
    std::reverse(ret.begin(), ret.end());
    // skip the above line if you don't mind that ret[0] is the least significant digit
    return ret;
}
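For example, a quick check (my sketch, assuming the template above is in scope):

#include <cstdio>

int main() {
    std::vector<unsigned char> digits = bcd(1234u);
    for (std::size_t i = 0; i < digits.size(); ++i)
        std::printf("%u", (unsigned)digits[i]); // prints "1234", one digit per byte
    return 0;
}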
Of course, if you know the width of your int type, you may prefer fixed-length arrays. There's also no reason to reverse at all if you remember that the 0th digit is the least significant and reverse only on input/output. Keeping the least significant digit first simplifies digit-wise arithmetic when you don't use a fixed number of digits.
If you want to represent "0" as the single "0" decimal digit rather than the empty digit-string (either is valid), then you'd check specifically for x==0.
Just simplified it.
typedef unsigned int uint;

uint Convert(uint value, const uint base1, const uint base2)
{
    uint result = 0;
    // read the digits of 'value' in base1 and re-weight them in base2;
    // an integer multiplier avoids the rounding risk of floating-point pow()
    for (uint multiplier = 1; value > 0; multiplier *= base2)
    {
        result += value % base1 * multiplier;
        value /= base1;
    }
    return result;
}
uint FromBCD(uint value)
{
return Convert(value, 16, 10);
}
uint ToBCD(uint value)
{
return Convert(value, 10, 16);
}
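A round-trip sketch (my addition, assuming the functions above); packed BCD reads the same in hex as the source value does in decimal:

#include <cstdio>

int main() {
    uint packed = ToBCD(1234);
    std::printf("%X\n", packed);           // prints "1234", i.e. the value 0x1234
    std::printf("%u\n", FromBCD(packed));  // prints "1234" again
    return 0;
}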
If you want two decimal digits per byte, and "unsigned" is half the size of "unsigned long" (use uint32_t and uint64_t from <stdint.h> if you want exact widths):
unsigned long bcd(unsigned x) {
    unsigned long ret = 0;
    int shift = 0;
    while (x > 0) {
        unsigned d = x / 10;
        ret |= (unsigned long)(x - d * 10) << shift; // each digit gets its own nibble
        shift += 4;
        x = d;
    }
    return ret;
}
This leaves you with the least significant (unit) decimal digit in the least significant half-byte. You can also execute the loop a fixed number of times (10 for a 32-bit value), not stopping early when only 0 bits are left, which would allow the optimizer to unroll it, but that's slower if your numbers are often small.
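For reference, a minimal sketch of that fixed-count variant (my illustration, assuming a 32-bit unsigned and a 64-bit unsigned long):

unsigned long bcd_fixed(unsigned x) {
    unsigned long ret = 0;
    // always run 10 iterations (the maximum digit count of a 32-bit value),
    // so the compiler is free to unroll the loop
    for (int shift = 0; shift < 40; shift += 4) {
        unsigned d = x / 10;
        ret |= (unsigned long)(x - d * 10) << shift;
        x = d;
    }
    return ret;
}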
#include <stdint.h>
/* Standard iterative function to convert a 16-bit binary integer to packed BCD */
uint32_t dec2bcd(uint16_t dec)
{
    uint32_t result = 0;
    int shift = 0;
    while (dec)
    {
        result += (dec % 10) << shift; // place each decimal digit in its own nibble
        dec = dec / 10;
        shift += 4;
    }
    return result;
}
/* Recursive one-liner because that's fun */
uint32_t dec2bcd_r(uint16_t dec)
{
    return (dec) ? ((dec2bcd_r(dec / 10) << 4) + (dec % 10)) : 0;
}
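A quick sanity check of both versions (my sketch, not part of the original answer):

#include <stdio.h>

int main(void)
{
    printf("%X\n", dec2bcd(1234));   /* prints "1234": hex nibbles mirror the decimal digits */
    printf("%X\n", dec2bcd_r(1234)); /* same result from the recursive version */
    return 0;
}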
This code encodes and decodes; the benchmark code is below.
I used a uint64_t to store the BCD here. That's convenient and fixed-width, but not very space efficient for large tables; for those, pack two BCD digits per byte into a char[] (a sketch of that layout follows).
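Something like the following would do it (my illustration; the pack_bcd name and the 5-byte buffer for a uint32_t are assumptions):

#include <stdint.h>

/* Pack a uint32_t's decimal digits two per byte, least significant pair first.
   A 32-bit value has at most 10 decimal digits, so 5 bytes suffice. */
void pack_bcd(uint32_t x, unsigned char out[5]) {
    for (int i = 0; i < 5; i++) {
        unsigned char lo = (unsigned char)(x % 10); x /= 10;
        unsigned char hi = (unsigned char)(x % 10); x /= 10;
        out[i] = (unsigned char)((hi << 4) | lo);
    }
}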
#include <stdio.h>
#include <stdint.h>
#include <time.h>

// -------------------------------------------------------------------------------------
uint64_t uint32_to_bcd(uint32_t usi) {
    uint64_t shift = 16;
    uint64_t result = (usi % 10);
    while (usi = (usi / 10)) {
        result += (usi % 10) * shift;
        shift *= 16; // same as shift <<= 4; fine up to 63 bits on a uint64_t operand
    }
    return result;
}
// ---------------------------------------------------------------------------------------
uint32_t bcd_to_ui32(uint64_t bcd) {
    uint64_t mask = 0x000f;
    uint64_t pwr = 1;
    uint64_t i = (bcd & mask);
    while (bcd = (bcd >> 4)) {
        pwr *= 10;
        i += (bcd & mask) * pwr;
    }
    return (uint32_t)i;
}
// --------------------------------------------------------------------------------------
const unsigned long LOOP_KNT = 3400000000UL; // set to the clock frequency of your CPU
// --------------------------------------------------------------------------------------
int main(void) {
    clock_t start;
    uint32_t foo = 0;

    printf("\nRunning benchmarks for %lu loops.", LOOP_KNT);
    start = clock();
    for (uint32_t i = 0; i < LOOP_KNT; i++) {
        foo = bcd_to_ui32(uint32_to_bcd(i >> 10));
    }
    printf("\nET for bcd_to_ui32(uint32_to_bcd(i)) was %f milliseconds. foo %u",
           (double)(clock() - start) * 1000.0 / CLOCKS_PER_SEC, foo);

    printf("\n\nRunning benchmarks for %lu loops.", LOOP_KNT);
    start = clock();
    for (uint32_t i = 0; i < LOOP_KNT; i++) {
        foo = bcd_to_ui32(i >> 10);
    }
    printf("\nET for bcd_to_ui32(i) was %f milliseconds. foo %u",
           (double)(clock() - start) * 1000.0 / CLOCKS_PER_SEC, foo);

    getchar();
    return 0;
}
NOTE: Shifting left by more than 32 bits is not actually impossible; what's undefined in C is shifting a value whose (promoted) type is only 32 bits wide by 32 or more. With a uint64_t operand, shifts up to 63 bits are well defined, and multiplying by 16 is exactly equivalent to shifting left by 4 anyway (the compiler typically emits the same shift instruction for both).
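A small demonstration of that pitfall (my sketch; the point is the type of the left operand, not the shift amount):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t ok = (uint64_t)1 << 40;  // well defined: the operand is 64 bits wide
    // uint64_t bad = 1 << 40;        // undefined: the literal 1 is a 32-bit int
    printf("%llu\n", (unsigned long long)ok); // prints 1099511627776
    return 0;
}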