I have the following (simplified) code which I'd like to optimise for speed:
long inputLen = 50000000; // 50 million
DataTable dataTable = new DataTable();
DataRow dataRow;
Instead of holding a huge data table in memory, I would suggest implementing an IDataReader which serves up the data as the bulk copy proceeds. This removes the need to keep everything in memory up front, and should therefore improve performance.
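Here is a rough sketch of what such a reader could look like, assuming a single int column of random values (the class name, column name, and row count are mine, not from the question). Only the members SqlBulkCopy needs for a plain ordinal-mapped copy do real work; the rest are stubbed:

using System;
using System.Data;

// Illustrative streaming reader: generates rows on demand instead of
// materialising them in a DataTable.
sealed class GeneratedDataReader : IDataReader
{
    private readonly long rowCount;
    private long position = -1;
    private readonly Random rng = new Random();

    public GeneratedDataReader(long rowCount) { this.rowCount = rowCount; }

    public int FieldCount => 1;

    // SqlBulkCopy pulls the stream: Read() advances, GetValue(i) supplies cells.
    public bool Read() => ++position < rowCount;
    public object GetValue(int i) => rng.Next();

    public string GetName(int i) => "Value";
    public int GetOrdinal(string name) => 0;
    public Type GetFieldType(int i) => typeof(int);
    public string GetDataTypeName(int i) => "int";
    public bool IsDBNull(int i) => false;
    public int GetValues(object[] values) { values[0] = GetValue(0); return 1; }

    public void Close() { }
    public void Dispose() { }
    public bool IsClosed => position >= rowCount;
    public int Depth => 0;
    public int RecordsAffected => -1;
    public bool NextResult() => false;
    public DataTable GetSchemaTable() => throw new NotSupportedException();

    // Typed getters are not used for this copy; stubbed for brevity.
    public object this[int i] => GetValue(i);
    public object this[string name] => GetValue(GetOrdinal(name));
    public bool GetBoolean(int i) => throw new NotSupportedException();
    public byte GetByte(int i) => throw new NotSupportedException();
    public long GetBytes(int i, long fieldOffset, byte[] buffer, int bufferOffset, int length) => throw new NotSupportedException();
    public char GetChar(int i) => throw new NotSupportedException();
    public long GetChars(int i, long fieldOffset, char[] buffer, int bufferOffset, int length) => throw new NotSupportedException();
    public IDataReader GetData(int i) => throw new NotSupportedException();
    public DateTime GetDateTime(int i) => throw new NotSupportedException();
    public decimal GetDecimal(int i) => throw new NotSupportedException();
    public double GetDouble(int i) => throw new NotSupportedException();
    public float GetFloat(int i) => throw new NotSupportedException();
    public Guid GetGuid(int i) => throw new NotSupportedException();
    public short GetInt16(int i) => throw new NotSupportedException();
    public int GetInt32(int i) => (int)GetValue(i);
    public long GetInt64(int i) => throw new NotSupportedException();
    public string GetString(int i) => throw new NotSupportedException();
}

You then hand an instance straight to SqlBulkCopy.WriteToServer(IDataReader), and rows are produced on demand rather than buffered.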
You should not construct the entire DataTable in memory. Use the overload of WriteToServer that takes an array of DataRow, and just split your data into chunks.
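Roughly like this, assuming a single int column; the connection string, destination table name, and chunk size are placeholders to adapt:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Linq;

const int chunkSize = 100_000;
string connectionString = "Server=.;Database=Test;Integrated Security=true"; // placeholder

var table = new DataTable();
table.Columns.Add("Value", typeof(int));

var rng = new Random();
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "dbo.BulkData"; // placeholder
        bulkCopy.BulkCopyTimeout = 0;

        for (long i = 0; i < 50_000_000; i++)
        {
            var row = table.NewRow();
            row[0] = rng.Next();
            table.Rows.Add(row);

            // Flush a full chunk via the DataRow[] overload, then clear the
            // table so it never holds more than one chunk in memory.
            if (table.Rows.Count == chunkSize)
            {
                bulkCopy.WriteToServer(table.Rows.Cast<DataRow>().ToArray());
                table.Clear();
            }
        }

        if (table.Rows.Count > 0) // flush the final partial chunk
            bulkCopy.WriteToServer(table.Rows.Cast<DataRow>().ToArray());
    }
}

This keeps peak memory at one chunk of rows instead of all 50 million.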
For such a big table, you should instead use the
public void WriteToServer(IDataReader reader)
method.
It may mean you'll have to implement a "fake" IDataReader yourself (if you don't get the data from an existing IDataReader), but this way you'll get streaming from end to end and will avoid a 200-million-iteration loop.
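For example (a sketch, assuming an IDataReader implementation like the GeneratedDataReader sketched in an earlier answer, with placeholder connection string and table name):

using System.Data.SqlClient;

string connectionString = "Server=.;Database=Test;Integrated Security=true"; // placeholder

using (var connection = new SqlConnection(connectionString))
using (var reader = new GeneratedDataReader(50_000_000))
{
    connection.Open();
    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "dbo.BulkData"; // placeholder
        bulkCopy.BatchSize = 100_000;   // commit in batches rather than one giant transaction
        bulkCopy.BulkCopyTimeout = 0;   // a 50M-row copy will outlive the default timeout
        bulkCopy.WriteToServer(reader); // rows stream through; nothing is buffered up front
    }
}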