In answering this question, I came across a situation that I don't understand. The OP was trying to load XML from the following location: http://www.google.com/ig/api?weath
The WebClient
uses the encoding information in the headers of the HTTP response to determine the correct encoding (in this case ISO-8859-1, a single-byte encoding that extends ASCII to 8 bits per character).
It looks like XmlDocument.Load
doesn't use this information, and since the encoding is also missing from the XML declaration it has to guess at an encoding and gets it wrong. Some digging around leads me to believe that it chooses UTF-8.
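One way around this (a rough sketch of the "download first, then parse" approach, not necessarily exactly what the OP did) is to let WebClient do the decoding and hand XmlDocument an already-decoded string. The URL in the question is truncated, so the one below is a placeholder:

using System.Net;
using System.Xml;

class Program
{
    static void Main()
    {
        // Substitute the full API URL here (the one quoted above is truncated).
        string url = "http://www.google.com/ig/api?weath...";

        using (var client = new WebClient())
        {
            // DownloadString honours the charset in the Content-Type response
            // header (ISO-8859-1 here), so the text is decoded correctly.
            string xml = client.DownloadString(url);

            // LoadXml parses an already-decoded .NET string, so XmlDocument
            // never has to guess at a byte-level encoding.
            var doc = new XmlDocument();
            doc.LoadXml(xml);
        }
    }
}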
If we want to get really technical, the character it throws up on is "à", which is 0xE0 in the ISO-8859-1 encoding, but that byte on its own isn't valid UTF-8 - specifically, its binary representation is:
11100000
If you dig around in the UTF-8 Wikipedia article, you can see that this indicates a code point (i.e. a character) consisting of a total of 3 bytes that take the following format:
Byte 1 Byte 2 Byte 3
----------- ----------- -----------
1110xxxx 10xxxxxx 10xxxxxx
But if we look back at the document, the next two characters are ": ", which are 0x3A and 0x20 in ISO-8859-1. This means what we actually end up with is:
Byte 1 Byte 2 Byte 3
----------- ----------- -----------
11100000 00111010 00100000
Neither the 2nd nor the 3rd byte of the sequence has 10
as its two most significant bits (which would mark it as a continuation byte), so this sequence makes no sense in UTF-8.
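If you want to see this for yourself, here is a quick sketch (the class and variable names are mine, not from the question) that feeds those three ISO-8859-1 bytes to a strict UTF-8 decoder:

using System;
using System.Text;

class Utf8Check
{
    static void Main()
    {
        // "à: " as raw ISO-8859-1 bytes: 0xE0, 0x3A, 0x20.
        byte[] bytes = { 0xE0, 0x3A, 0x20 };

        // The same bytes decode happily as ISO-8859-1.
        Console.WriteLine(Encoding.GetEncoding("ISO-8859-1").GetString(bytes)); // à:

        // A strict UTF-8 decoder rejects them, because 0xE0 promises two
        // continuation bytes of the form 10xxxxxx and never gets them.
        var strictUtf8 = new UTF8Encoding(false, true); // throwOnInvalidBytes = true
        try
        {
            strictUtf8.GetString(bytes);
        }
        catch (DecoderFallbackException ex)
        {
            Console.WriteLine("Invalid UTF-8: " + ex.Message);
        }
    }
}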
If the "Umidità" string used as a node's inner text is placed inside a CDATA section, <![CDATA[Umidità]]>, then XmlDocument.Load won't throw an error for it.
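For completeness, a tiny sketch of that CDATA form (the element name is made up); note that it only demonstrates that the CDATA wrapping parses cleanly from an already-decoded string, it doesn't change how raw bytes from the URL would be decoded:

using System;
using System.Xml;

class CdataExample
{
    static void Main()
    {
        var doc = new XmlDocument();
        // Hypothetical fragment with the non-ASCII text wrapped in CDATA.
        doc.LoadXml("<condition><![CDATA[Umidità]]></condition>");
        Console.WriteLine(doc.DocumentElement.InnerText); // Umidità
    }
}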