Section 5.5 of the PNG specification discusses a concept in the PNG file format called "CRC", or "Cyclic Redundancy Code". I've never heard of it before, so I'm trying to understand how it works and how to compute it.
The spec includes a link to example code:
https://www.w3.org/TR/2003/REC-PNG-20031110/#D-CRCAppendix
The spec has errors or is confusing.
That should be "data from each byte is processed from the least significant bit (0) to the most significant bit (7)".
The CRC is based on a 33-term polynomial (degree 32), where each term has a one-bit coefficient, 0 or 1; the terms with 0 coefficients are omitted when writing out the polynomial.
Think of the CRC as being held in a 32-bit register. The sequence is to xor a byte of data into the rightmost byte of the CRC register, bits 7 through 0 (which technically correspond to the polynomial coefficients of x^24 to x^31). Then the CRC is "cycled" to the right by 8 bits (via table lookup). Once all data bytes have gone through this cycle, then, per Mark Adler's comment, the CRC is appended to the data most significant byte first: (CRC>>24)&0xff, (CRC>>16)&0xff, (CRC>>8)&0xff, CRC&0xff.
The wiki article may help. For the example in the computation section, the dividend would be an array of data bytes with the bits of each byte reversed, the bits of the 33 bit polynomial would be non-reversed (0x104C11DB7). After doing the computation, the bits of the remainder would be reversed and appended to the data bytes.
https://en.wikipedia.org/wiki/Cyclic_redundancy_check
Mark Adler's answer includes a link to a good tutorial for a CRC. His answer also explains the x's used in a polynomial. It's just like a polynomial in algebra, except the coefficients can only be 0 or 1, and addition (or subtraction) is done using XOR.
What is x?
From the wiki example:
data = 11010011101100 = x^13 + x^12 + x^10 + x^7 + x^6 + x^5 + x^3 + x^2
divisor = 1011 = x^3 + x + 1
Three 0 bits are appended to the data, effectively multiplying it by x^3:
dividend = 11010011101100000 = x^16 + x^15 + x^13 + x^10 + x^9 + x^8 + x^6 + x^5
Then the crc = dividend % divisor, with coefficients restricted to 0 or 1.
(x^16 + x^15 + x^13 + x^10 + x^9 + x^8 + x^6 + x^5) % (x^3 + x + 1) = x^2
11010011101100000 % 1011 = 100
I would recommend reading Ross Williams' classic "A Painless Guide to CRC Error Detection Algorithms". Therein you will find in-depth explanations and examples.
The polynomial is simply a different way to interpret a string of bits. When you have n bits in a register, they are most commonly interpreted either as just that, a list of n independent bits, or they are interpreted as an integer, where you multiply each bit by two raised to the powers 0 to n-1 and add them up. The polynomial representation is where you instead interpret each bit as the coefficient of a polynomial. Since a bit can only be a 0 or a 1, the resulting polynomials never actually show the 0 or 1. Instead the x^n term is either there or not. So the four bits 1011 can be interpreted to be 1·x^3 + 0·x^2 + 1·x^1 + 1·x^0 = x^3 + x + 1. Note that I made the choice that the most significant bit was the coefficient of the x^3 term. That is an arbitrary choice, where I could have chosen the other direction.
As for what x is, it is simply a placeholder for the coefficient and the power of x. You never set x to some value, nor determine anything about x. What it does is allow you to operate on those bit strings as polynomials. When doing operations on these polynomials, you treat them just like the polynomials you had in algebra class, except that the coefficients are constrained to the field GF(2), where the coefficients can only be 0 or 1. Multiplication becomes the and operation, and addition becomes the exclusive-or operation. So 1 plus 1 is 0. You get a new and different way to add, multiply, and divide strings of bits. That different way is key to many error detection and correction schemes.
It is interesting, but ultimately irrelevant, that if you set x to 2 in the polynomial representation of a string of bits (with the proper ordering choice), you get the integer interpretation of that string of bits.
Beware: If you use (00000000)_2 and (00000001)_2 as the binary representations of the 0s and 1s in your example IDAT chunk, you will compute the CRC incorrectly. The ASCII values of '0' and '1' are 48 = (00110000)_2 and 49 = (00110001)_2; similarly, the ASCII values of 'I', 'D', 'A', and 'T' are 73 = (01001001)_2, 68 = (01000100)_2, 65 = (01000001)_2, and 84 = (01010100)_2. So, assuming you meant the values 0 and 1 rather than the characters ‘0’ and ‘1’, what you must compute the CRC of is (01001001 01000100 01000001 01010100 00000000 00000001 00000000 00000001 00000000 00000001 00000000 00000001 00000000 00000001 00000000)_2.
Inconsequentially to the CRC but consequentially to the validity of the chunk, the length field (i.e., the first 4 bytes) of the chunk should contain the length in bytes of the data only, which is 11, which is the ASCII value of a vertical tab (VT), which is a nonprinting character but can be represented in strings by the hexadecimal escape sequence \x0B (in which (B)_16 = 11). Similarly, the first 3 bytes must contain the character for which the ASCII value is 0 (rather than 48), which is null (NUL), which can be represented in strings by the hexadecimal escape sequence \x00. So, the length field must contain something like "\x00\x00\x00\x0B".