I am pulling 1M+ records from an API. The pull works OK, but I'm getting an OutOfMemoryException when attempting to ReadToEnd
into a string variable.
Unfortunately, you didn't show your code, but it sounds like the entire response is being loaded into memory. That's what you need to avoid.
It's best to use a stream to process the file without ever holding the entire thing in memory.
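For example, here's a minimal sketch of reading the response as a stream instead of a string; the URL and the <record> element name are placeholders for whatever your API actually returns:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml;

class StreamingPull
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        // GetStreamAsync hands back the response body as a stream; nothing is buffered into one giant string
        using (var body = await http.GetStreamAsync("https://example.com/api/records"))
        using (var reader = XmlReader.Create(body))
        {
            long count = 0;
            while (reader.Read())
            {
                // "record" stands in for whatever element wraps each row in your feed
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "record")
                    count++;
            }
            Console.WriteLine($"Processed {count} records without loading the document into memory.");
        }
    }
}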
It sounds like your file is too big for your environment. Loading the full DOM for a large file can be problematic, especially in a 32-bit (win32) process (you haven't indicated whether that's the case).
You can combine the speed and memory efficiency of XmlReader with the convenience of XElement/XNode, and use an XStreamingElement to save the transformed content after processing. This is much more memory-efficient for large files.
Here's an example. The key point is that XStreamingElement only stays memory-efficient if you hand it a deferred enumerable, so elements are produced lazily while saving rather than added up front (ProcessNode is a placeholder for whatever per-element work you need to do):

// requires: using System.Collections.Generic; using System.IO; using System.Xml; using System.Xml.Linq;

// lazily yield one element at a time so the whole document is never in memory
static IEnumerable<XElement> StreamElements(Stream stream)
{
    using (var xr = XmlReader.Create(stream))
    {
        xr.MoveToContent(); // position the reader on the root element
        xr.Read();          // step inside the root so we read its children, not the root itself
        while (!xr.EOF)
        {
            // whatever you're interested in
            if (xr.NodeType == XmlNodeType.Element)
            {
                // ReadFrom materializes just this element and leaves the reader on the next node
                var node = XNode.ReadFrom(xr) as XElement;
                if (node != null)
                {
                    ProcessNode(node);
                    yield return node;
                }
            }
            else
            {
                xr.Read(); // skip whitespace, comments, the root end tag, etc.
            }
        }
    }
}

// use an XStreamingElement for writing; its content is only enumerated
// while Save runs, so elements flow straight from the reader to the output
var st = new XStreamingElement("root", StreamElements(stream));
st.Save(outstream); // or st.WriteTo(xmlwriter);
XmlReader is the way to go when memory is an issue. It is also the fastest option.
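For completeness, a minimal sketch of the pure-XmlReader approach with no LINQ to XML at all; records.xml and the <name> element are hypothetical, so substitute your own file and element names:

using System;
using System.Xml;

class XmlReaderDemo
{
    static void Main()
    {
        // only the reader's small internal buffer is held in memory, regardless of file size
        using (var reader = XmlReader.Create("records.xml"))
        {
            // ReadToFollowing skips forward to each matching element without building a DOM
            while (reader.ReadToFollowing("name"))
            {
                // ReadElementContentAsString returns just this element's text and moves on
                Console.WriteLine(reader.ReadElementContentAsString());
            }
        }
    }
}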