Please explain what this code is doing (someChar - 48)

南旧 2021-01-05 08:14

I'm going through some practice problems, and I saw this code:

#include <stdio.h>
#include <string.h>

int main(void) {
    char* s = "357";
    int sum = 0;

    for (int i = 0; i < strlen(s); i++) {
        sum += s[i] - 48;
    }

    printf("%d\n", sum);
}

4 Answers
  •  一生所求
    2021-01-05 09:00

    The code basically sums the digits of a number represented as a string. It makes two important assumptions to work properly:

    • The string contains only chars in the '0'..'9' range
    • The character encoding used is ASCII

    In ASCII, '0' == 48, '1' == 49, and so on. Thus, '0' - 48 == 0, '1' - 48 == 1, and so on. That is, subtracting by 48 translates the char values '0'..'9' to the int values 0..9.

    Thus, precisely because '0' == 48, the code will also work with:

    sum += s[i] - '0';
    

    The intention is perhaps slightly clearer in this version.

    You can of course do the "reverse" mapping by addition, e.g. 5 + '0' == '5'. Similarly, if you have a char containing a letter in 'A'..'Z' range, you can "subtract" 'A' from it to get the index of that letter in the 0..25 range.
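
    A minimal sketch of both directions (the variable names are just for illustration; the letter mapping assumes 'A'..'Z' are contiguous, as in ASCII):

        #include <stdio.h>

        int main(void) {
            int digit = 5;
            char digitChar = digit + '0';    /* reverse mapping by addition: 5 -> '5' */

            char letter = 'D';
            int letterIndex = letter - 'A';  /* 'D' -> 3, assuming contiguous 'A'..'Z' (true in ASCII) */

            printf("%c %d\n", digitChar, letterIndex);   /* prints: 5 3 */
            return 0;
        }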

    See also

    • Wikipedia/Digit sum
    • Wikipedia/ASCII

    Related questions

    • How to convert a single char into an int
    • Language showdown: Convert string of digits to array of integers?
      • Many examples of this digit conversion, using subtraction with both '0' and 48! (A rough sketch of the same idea follows below.)
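
    As a rough sketch of that kind of conversion in C (the array size and names below are made up for the example):

        #include <stdio.h>
        #include <string.h>

        int main(void) {
            const char *digits = "357";
            int values[16];                      /* illustrative fixed size; assumes a short input */
            size_t n = strlen(digits);

            for (size_t i = 0; i < n; i++) {
                values[i] = digits[i] - '0';     /* same char-to-int translation as above */
            }

            for (size_t i = 0; i < n; i++) {
                printf("%d ", values[i]);        /* prints: 3 5 7 */
            }
            printf("\n");
            return 0;
        }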

    On alternative encodings

    As mentioned, the original - 48 code assumes the character encoding is ASCII. Writing - '0' instead not only improves readability, but also drops the ASCII assumption: it works with any encoding, because the C language stipulates that the digit characters '0'..'9' must be encoded sequentially in a contiguous block.

    On the other hand, no such stipulation is made about letters. Thus, in the rare situation where you're using EBCDIC encoding, for example, mapping 'A'..'Z' to 0..25 is no longer as simple as subtracting 'A', because letters are NOT encoded sequentially in a contiguous block in EBCDIC.
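
    If you need a letter-to-index mapping that does not depend on contiguous letter codes, one portable sketch (the helper name here is made up for the example) is to look the character up in an explicit alphabet string:

        #include <stdio.h>
        #include <string.h>

        /* Hypothetical helper: returns 0..25 for an uppercase letter, -1 otherwise.
           It never assumes 'A'..'Z' are contiguous, so it also works under EBCDIC. */
        static int letter_index(char c) {
            const char *alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
            const char *p = (c != '\0') ? strchr(alphabet, c) : NULL;
            return (p != NULL) ? (int)(p - alphabet) : -1;
        }

        int main(void) {
            printf("%d\n", letter_index('D'));   /* prints: 3 */
            return 0;
        }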

    Some programming languages simplify matters by mandating that one particular encoding be used to represent the source code (e.g. Java uses Unicode: JLS §3.1).

    See also

    • Wikipedia/Extended Binary Coded Decimal Interchange Code (EBCDIC)

    Related questions

    • Are digits represented in sequence in all text encodings?
