I know, I know, this has been done to death; I'm just posting to ask whether this solution is still relevant now that we have .NET 4 and newer.
This link explains it:
The answer to this question hasn't changed in .NET 4 - for best performance you should still use XmlReader, as it streams the document instead of loading the whole thing into memory.
The code you refer to uses XmlReader for the actual querying, so it should be reasonably quick even on large documents.
The best way to do this is to read the file node by node using XmlReader.Create.
// XmlReader.Create returns an XmlReader, which has no WhitespaceHandling
// property; use XmlReaderSettings.IgnoreWhitespace instead.
var settings = new XmlReaderSettings { IgnoreWhitespace = true };
using (var reader = XmlReader.Create(filename, settings))
{
    while (reader.Read())
    {
        // your code here.
    }
}
I was struggling with the same issue for the last few days. Right-click the project, open Properties, go to the Build tab, select Any CPU as the platform target, untick the Prefer 32-bit option, and save before running your app. That fixed it for me. I have attached a snapshot of the same.
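For reference, the same change can be made directly in the project file instead of through the Properties dialog; this is a minimal sketch of the relevant MSBuild properties in a standard .csproj (your project may have several per-configuration PropertyGroups):

```xml
<PropertyGroup>
  <!-- Any CPU target; on a 64-bit OS the process can then use more than 4 GB -->
  <PlatformTarget>AnyCPU</PlatformTarget>
  <!-- Equivalent to unticking "Prefer 32-bit" on the Build tab -->
  <Prefer32Bit>false</Prefer32Bit>
</PropertyGroup>
```

With Prefer32Bit left at its default of true, an AnyCPU executable still runs as a 32-bit process and hits the 32-bit address-space limit when loading very large XML documents into memory.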
If your XML looks like this:
<root>
<item>...</item>
<item>...</item>
...
</root>
you can read the file with XmlReader and open each 'item' with XmlDocument, like this:
using (var reader = XmlReader.Create(filename))
{
    reader.ReadToDescendant("root");
    reader.ReadToDescendant("item");
    while (reader.NodeType == XmlNodeType.Element && reader.Name == "item")
    {
        XmlDocument doc = new XmlDocument();
        // ReadOuterXml returns this <item> as a string and advances
        // the reader past it.
        doc.LoadXml(reader.ReadOuterXml());
        XmlNode item = doc.DocumentElement;
        // do your work with `item`
        // Skip whitespace between items; note that following ReadOuterXml
        // with ReadToNextSibling would skip every other item in
        // whitespace-free XML, because the reader is already positioned
        // on the next <item>.
        reader.MoveToContent();
    }
}
In this case there is effectively no limit on file size, since only one 'item' is held in memory at a time.
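If you prefer LINQ to XML, the same hybrid approach works with XElement instead of XmlDocument: stream with XmlReader and materialize one element at a time via XNode.ReadFrom. This is a sketch assuming the same root/item layout as above (the method name StreamItems is mine):

```csharp
using System.Collections.Generic;
using System.Xml;
using System.Xml.Linq;

static class XmlStreaming
{
    // Lazily yields each <item> element without loading the whole file.
    public static IEnumerable<XElement> StreamItems(string filename)
    {
        using (var reader = XmlReader.Create(filename))
        {
            reader.MoveToContent(); // position on the root element
            while (!reader.EOF)
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "item")
                {
                    // ReadFrom builds just this <item> subtree and
                    // advances the reader past it, so don't Read() here.
                    yield return (XElement)XNode.ReadFrom(reader);
                }
                else
                {
                    reader.Read();
                }
            }
        }
    }
}
```

Because the enumeration is lazy, you can run LINQ queries over the items while still keeping only one element in memory at a time.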