I'm trying to convert an integer to a string right now, and I'm having a problem.
I've gotten the code written and working for the most part, but it has a small flaw when carrying to the next place. It's hard to describe, so I'll give you an example. Using base 26 with a character set consisting of the lowercase alphabet:
0 = "a"
1 = "b"
2 = "c"
...
25 = "z"
26 = "ba" (This should equal "aa")
It seems to skip the character at the zero place in the character set in certain situations.
The thing that's confusing me is I see nothing wrong with my code. I've been working on this for too long now, and I still can't figure it out.
    const char* charset = "abcdefghijklmnopqrstuvwxyz";
    int charsetLength = strlen(charset);
    unsigned long long num = 5678; // Some random number, it doesn't matter
    std::string key;
    do
    {
        unsigned int remainder = num % charsetLength;
        num /= charsetLength;
        key.insert(key.begin(), charset[remainder]);
    } while (num);
I have a feeling the function is tripping up over the modulo returning a zero, but I've been working on this so long, I can't figure out how it's happening. Any suggestions are welcome.
EDIT: The fact that the generated string is little endian is irrelevant for my application.
If I understand correctly what you want (the numbering used by Excel for columns: A, B, ..., Z, AA, AB, ...), this is a positional notation that represents numbers starting from 1. The 26 digits have values 1, 2, ..., 26 and the base is 26. So A has value 1, Z value 26, AA value 27, and so on. Computing this representation is very similar to computing the ordinary one; you just need to adjust for the offset of 1 instead of 0.
#include <string>
#include <iostream>
#include <climits>

std::string base26(unsigned long v)
{
    char const digits[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    size_t const base = sizeof(digits) - 1;
    char result[sizeof(unsigned long)*CHAR_BIT + 1];
    char* current = result + sizeof(result);
    *--current = '\0';
    while (v != 0) {
        v--;
        *--current = digits[v % base];
        v /= base;
    }
    return current;
}
// for testing
#include <cstdlib>

int main(int argc, char* argv[])
{
    for (int i = 1; i < argc; ++i) {
        unsigned long value = std::strtol(argv[i], 0, 0);
        std::cout << value << " = " << base26(value) << '\n';
    }
    return 0;
}
Running with 1 2 26 27 52 53 676 677 702 703 gives
1 = A
2 = B
26 = Z
27 = AA
52 = AZ
53 = BA
676 = YZ
677 = ZA
702 = ZZ
703 = AAA
Your problem is that 'a' == 0.
In other words, "aa" is not the answer, because that is really 00. "ba" is the correct answer because 'b' == 1, which makes it 10 in base 26, i.e. 26 in decimal.
Your code is correct; you just seem to be misunderstanding it.
I think you should make a = 1 and z = 0, so that a, b, c, ..., z work just like the decimal digits 1, 2, 3, ..., 9, 0.
Compare it to the decimal system: 9 is followed by 10, not by 01!
To get AProgrammer's solution to compile on my system (gcc version 4.6.1, Ubuntu/Linaro 4.6.1-9ubuntu3) I needed to add the headers #include <climits> and #include <cstdlib>.
Source: https://stackoverflow.com/questions/2294443/base-conversion-problem