XmlDocument.Load fails, LoadXml works:


The WebClient uses the encoding information in the headers of the HTTP response to determine the correct encoding (in this case ISO-8859-1, a single-byte superset of ASCII, i.e. 8 bits per character).
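For context, here is a minimal sketch of the path that works (the URL is a placeholder; the feed is assumed to be served with a Content-Type header such as text/xml; charset=ISO-8859-1):

using System;
using System.Net;
using System.Xml;

// DownloadString decodes the response body using the charset from the
// Content-Type response header (ISO-8859-1 here), so LoadXml receives a
// correctly decoded .NET string and never has to guess an encoding.
using (var client = new WebClient())
{
    string xml = client.DownloadString("http://example.com/feed.xml"); // placeholder URL
    var doc = new XmlDocument();
    doc.LoadXml(xml);
    Console.WriteLine(doc.DocumentElement.Name);
}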

It looks like XmlDocument.Load doesn't use this information, and as the encoding is also missing from the XML declaration it has to guess at an encoding and gets it wrong. Some digging around leads me to believe that it chooses UTF-8.
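One way around that (a sketch, not something from the original question) is to stop Load from guessing by handing it a reader whose encoding is already fixed. Encoding.GetEncoding("ISO-8859-1"), WebClient.OpenRead and the XmlDocument.Load(TextReader) overload are all standard .NET APIs; the URL is again a placeholder:

using System.IO;
using System.Net;
using System.Text;
using System.Xml;

using (var client = new WebClient())
using (Stream stream = client.OpenRead("http://example.com/feed.xml")) // placeholder URL
using (var reader = new StreamReader(stream, Encoding.GetEncoding("ISO-8859-1")))
{
    var doc = new XmlDocument();
    doc.Load(reader); // Load uses the StreamReader's encoding instead of guessing
}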

If we want to get really technical, the character it throws up on is "à", which is 0xE0 in the ISO-8859-1 encoding, but that byte isn't valid on its own in UTF-8. Specifically, its binary representation is:

11100000

If you dig around in the UTF-8 Wikipedia article, you can see that this lead byte indicates a code point (i.e. character) encoded as a total of 3 bytes in the following format:

Byte 1      Byte 2      Byte 3
----------- ----------- -----------
1110xxxx    10xxxxxx    10xxxxxx

But if we look back at the document, the next two characters are ": ", which are 0x3A and 0x20 in ISO-8859-1. This means what we actually end up with is:

Byte 1      Byte 2      Byte 3
----------- ----------- -----------
11100000    00111010    00100000

Neither the 2nd nor the 3rd byte of the sequence has 10 as its two most significant bits (which would mark it as a continuation byte), and so this sequence makes no sense in UTF-8.
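To see that concretely, a strict UTF-8 decoder rejects exactly these three bytes, while ISO-8859-1 decodes them happily (a small sketch; note that the default Encoding.UTF8 would silently substitute U+FFFD rather than throw):

using System;
using System.Text;

byte[] bytes = { 0xE0, 0x3A, 0x20 }; // "à: " in ISO-8859-1

// A UTF8Encoding configured to throw on invalid bytes fails here, because
// 0xE0 announces a 3-byte sequence but 0x3A and 0x20 do not start with 10
// and so cannot be continuation bytes.
var strictUtf8 = new UTF8Encoding(encoderShouldEmitUTF8Identifier: false, throwOnInvalidBytes: true);
try
{
    strictUtf8.GetString(bytes);
}
catch (DecoderFallbackException)
{
    Console.WriteLine("Not valid UTF-8");
}

Console.WriteLine(Encoding.GetEncoding("ISO-8859-1").GetString(bytes)); // prints "à: "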

If the Umidità string is used as a node's inner text, it should be placed inside a CDATA section, i.e. <![CDATA[ Umidità ]]>; then XmlDocument.Load won't give any error.
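If it helps, here is a minimal sketch of building such a node with the .NET API (the element names are made up for illustration):

using System;
using System.Xml;

var doc = new XmlDocument();
XmlElement root = doc.CreateElement("weather");       // hypothetical element name
XmlElement item = doc.CreateElement("description");   // hypothetical element name
item.AppendChild(doc.CreateCDataSection("Umidità"));  // wraps the text in <![CDATA[ ... ]]>
root.AppendChild(item);
doc.AppendChild(root);
Console.WriteLine(doc.OuterXml); // <weather><description><![CDATA[Umidità]]></description></weather>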
