How do I read a large file from disk to database without running out of memory


Question


I feel embarrassed to ask this question, as I feel like I should already know. However, given that I don't, I want to know how to read large files from disk into a database without getting an OutOfMemory exception. Specifically, I need to load CSV (or really tab-delimited) files.

I am experimenting with CSVReader, specifically this code sample, but I'm sure I'm doing it wrong. Some of their other code samples show how you can read streaming files of any size, which is pretty much what I want (only I need to read from disk), but I don't know what type of IDataReader I could create to allow this.

I am reading directly from disk, and my attempt to ensure I don't ever run out of memory by reading too much data at once is below. I can't help thinking that I should be able to use a BufferedFileReader or something similar, where I can point to the location of the file and specify a buffer size, and since CsvDataReader expects an IDataReader as its first parameter, it could just use that. Please show me the error of my ways, let me be rid of my GetData method with its arbitrary file-chunking mechanism, and help me out with this basic problem.

    private void button3_Click(object sender, EventArgs e)
    {   
        totalNumberOfLinesInFile = GetNumberOfRecordsInFile();
        totalNumberOfLinesProcessed = 0; 

        while (totalNumberOfLinesProcessed < totalNumberOfLinesInFile)
        {
            TextReader tr = GetData();
            using (CsvDataReader csvData = new CsvDataReader(tr, '\t'))
            {
                csvData.Settings.HasHeaders = false;
                csvData.Settings.SkipEmptyRecords = true;
                csvData.Settings.TrimWhitespace = true;

                for (int i = 0; i < 30; i++) // known number of columns for testing purposes
                {
                    csvData.Columns.Add("varchar");
                }

                using (SqlBulkCopy bulkCopy = new SqlBulkCopy(@"Data Source=XPDEVVM\XPDEV;Initial Catalog=MyTest;Integrated Security=SSPI;"))
                {
                    bulkCopy.DestinationTableName = "work.test";

                    for (int i = 0; i < 30; i++)
                    {
                        bulkCopy.ColumnMappings.Add(i, i); // map First to first_name
                    }

                    bulkCopy.WriteToServer(csvData);

                }
            }
        }
    }

    private TextReader GetData()
    {
        StringBuilder result = new StringBuilder();
        int totalDataLines = 0;
        using (FileStream fs = new FileStream(pathToFile, FileMode.Open, System.IO.FileAccess.Read, FileShare.ReadWrite))
        {
            using (StreamReader sr = new StreamReader(fs))
            {
                string line = string.Empty;
                while ((line = sr.ReadLine()) != null)
                {
                    if (line.StartsWith("D\t"))
                    {
                        totalDataLines++;
                        if (totalDataLines < 100000) // Arbitrary method of restricting how much data is read at once.
                        {
                            result.AppendLine(line);
                        }
                    }
                }
            }
        }
        totalNumberOfLinesProcessed += totalDataLines;
        return new StringReader(result.ToString());
    }

Answer 1:


Probably not the answer you're looking for, but this is what BULK INSERT was designed for.
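
For what it's worth, a minimal sketch of driving BULK INSERT from the C# side. The connection string and table come from the question; the file path is a placeholder and must be readable by the SQL Server machine itself, not just the client:

using System.Data.SqlClient;

// Sketch only: BULK INSERT streams the file server-side, so the client never holds it in memory.
// 'C:\data\myfile.txt' is a hypothetical path that must be visible to the SQL Server machine.
string connectionString = @"Data Source=XPDEVVM\XPDEV;Initial Catalog=MyTest;Integrated Security=SSPI;";
string sql = @"
    BULK INSERT work.test
    FROM 'C:\data\myfile.txt'
    WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', BATCHSIZE = 10000);";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(sql, connection))
{
    command.CommandTimeout = 0; // big loads can easily exceed the 30-second default
    connection.Open();
    command.ExecuteNonQuery();
}

If you cannot put the file somewhere the server can read it, the bcp command-line utility or SqlBulkCopy (as in your existing code) are the usual alternatives.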




Answer 2:


Your code is actually reading all of the data from the file and keeping it in the TextReader (in memory). Only then do you read from the TextReader to save it to the server.

If the data is large, the amount held in the TextReader causes the out-of-memory error. Please try it this way:

1) Read the data (one line at a time) from the file.

2) Insert each line into the server as you go.

The out-of-memory problem is solved because only one record is in memory at a time while processing.

Pseudo code

begin tran

While (data = FileReader.ReadLine())
{
  insert into Table[col0,col1,etc] values (data[0], data[1], etc)
}

end tran
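
A minimal C# sketch of that approach, under the assumptions from the question (table work.test, 30 varchar columns, data lines prefixed with "D\t"); the column names col0..col29 and the parameter names are placeholders:

using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

static void LoadLineByLine(string pathToFile, string connectionString)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
            // One parameterized INSERT, reused for every row, wrapped in a single transaction.
            string columns = string.Join(", ", Enumerable.Range(0, 30).Select(i => "col" + i));
            string values  = string.Join(", ", Enumerable.Range(0, 30).Select(i => "@p" + i));
            using (var command = new SqlCommand(
                "INSERT INTO work.test (" + columns + ") VALUES (" + values + ")",
                connection, transaction))
            {
                for (int i = 0; i < 30; i++)
                    command.Parameters.Add("@p" + i, SqlDbType.VarChar, -1);

                // File.ReadLines streams the file; the whole thing is never in memory.
                foreach (string line in File.ReadLines(pathToFile))
                {
                    if (!line.StartsWith("D\t")) continue;

                    string[] fields = line.Split('\t');
                    for (int i = 0; i < 30; i++)
                        command.Parameters["@p" + i].Value =
                            i < fields.Length ? (object)fields[i] : DBNull.Value;

                    command.ExecuteNonQuery();
                }
            }
            transaction.Commit();
        }
    }
}

Note that row-by-row inserts like this stay memory-safe but are much slower than a bulk load; the batching approaches in the other answers keep the same streaming property while writing in bigger chunks.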



Answer 3:


I would just use a BufferedFileReader-style reader with its readLine method and proceed exactly in the fashion above.

Basically, understand the responsibilities here:

The buffered file reader is the class that reads data from the file (buffer-wise); there should be a line reader too. The CSV reader is a utility class for reading the data, assuming it is in the correct format.

SqlBulkCopy you are using anyway.
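
Taking that first option, here is a minimal sketch that keeps those responsibilities separate without any custom reader classes: StreamReader does the buffered file reading, a small DataTable holds one batch at a time, and SqlBulkCopy does the load. The batch size of 10,000 is an arbitrary choice; the 30 columns, the "D\t" line filter, the table, and the connection-by-string all come from the question:

using System.Data;
using System.Data.SqlClient;
using System.IO;

static void BulkLoadInBatches(string pathToFile, string connectionString, int batchSize = 10000)
{
    var batch = new DataTable();
    for (int i = 0; i < 30; i++)
        batch.Columns.Add("col" + i, typeof(string)); // mapped to work.test by ordinal position

    using (var bulkCopy = new SqlBulkCopy(connectionString))
    using (var reader = new StreamReader(pathToFile)) // StreamReader buffers the file for us
    {
        bulkCopy.DestinationTableName = "work.test";
        bulkCopy.BulkCopyTimeout = 0;

        string line;
        while ((line = reader.ReadLine()) != null)
        {
            if (!line.StartsWith("D\t")) continue;

            // Assumes each data line has exactly 30 tab-separated fields.
            batch.Rows.Add(line.Split('\t'));

            if (batch.Rows.Count >= batchSize)
            {
                bulkCopy.WriteToServer(batch); // flush this batch to the server
                batch.Clear();                 // keeps memory bounded to one batch
            }
        }

        if (batch.Rows.Count > 0)
            bulkCopy.WriteToServer(batch);     // final partial batch
    }
}

With this shape, memory usage is bounded by one DataTable of batchSize rows no matter how big the file is.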

Second option

You can go straight to the database's import facility. If the format of the file is correct and this is the whole point of the program, that would be faster too.




Answer 4:


I think you may have a red herring with the size of the data. Every time I come across this problem, it's not the size of the data but the amount of objects created when looping over the data.

Look at your while loop, which adds records to the db within the method button3_Click(object sender, EventArgs e):

TextReader tr = GetData();
using (CsvDataReader csvData = new CsvDataReader(tr, '\t'))

Here you declare and instantiate two objects each iteration - meaning for each chunk of file you read you will instantiate 200,000 objects; the garbage collector will not keep up.

Why not declare the objects outside of the while loop?

TextReader tr = null;
CsvDataReader csvData = null;

This way, the GC will stand half a chance. You could prove the difference by benchmarking the while loop; you will no doubt notice a huge performance degradation after you have created just a couple of thousand objects.
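
For example, a minimal sketch of timing the loop, reusing the variables from the question's button3_Click, to see whether each chunk really does get slower as the run goes on:

using System;
using System.Diagnostics;

var stopwatch = new Stopwatch();
int chunk = 0;

while (totalNumberOfLinesProcessed < totalNumberOfLinesInFile)
{
    stopwatch.Restart();

    // ... the existing GetData / CsvDataReader / SqlBulkCopy work for one chunk ...

    Console.WriteLine("Chunk " + (++chunk) + ": " + stopwatch.ElapsedMilliseconds + " ms, " +
                      totalNumberOfLinesProcessed + " lines processed so far");
}

If the per-chunk time stays flat, object churn probably isn't the bottleneck.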




Answer 5:


Pseudo code:

while (!EOF) {
   while (chosenRecords.size() < WRITE_BUFFER_LIST_SIZE) {
      MyRecord record = chooseOrSkipRecord(file.readln());
      if (record != null) {
         chosenRecords.add(record)
      }
   }  
   insertRecords(chosenRecords) // <== writes data and clears the list
}

WRITE_BUFFER_LIST_SIZE is just a constant that you set... bigger means bigger batches and smaller means smaller batches. A size of 1 is RBAR (row by agonizing row) :).

If your operation is big enough that failing partway through is a realistic possibility, or if failing partway through could cost someone a non-trivial amount of money, you probably also want to write the total number of records processed so far from the file (including the ones you skipped) to a second table, as part of the same transaction, so that you can pick up where you left off in the event of partial completion.
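
A minimal sketch of that checkpoint idea, assuming a hypothetical progress table load_progress(file_name, lines_processed) and a DataTable holding the current batch; those table and column names are illustrative, not from the question:

using System.Data;
using System.Data.SqlClient;

static void CommitBatchWithCheckpoint(
    SqlConnection connection, DataTable batch, string destinationTable,
    string fileName, long linesReadSoFar)
{
    using (var transaction = connection.BeginTransaction())
    {
        // Write the buffered rows under the same transaction as the checkpoint.
        using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, transaction))
        {
            bulkCopy.DestinationTableName = destinationTable;
            bulkCopy.WriteToServer(batch);
        }

        // Record how far into the file we are; the data and the checkpoint can never
        // get out of sync because they commit together.
        using (var checkpoint = new SqlCommand(
            @"UPDATE load_progress SET lines_processed = @lines WHERE file_name = @file;
              IF @@ROWCOUNT = 0
                  INSERT INTO load_progress (file_name, lines_processed) VALUES (@file, @lines);",
            connection, transaction))
        {
            checkpoint.Parameters.AddWithValue("@file", fileName);
            checkpoint.Parameters.AddWithValue("@lines", linesReadSoFar);
            checkpoint.ExecuteNonQuery();
        }

        transaction.Commit();
        batch.Clear();
    }
}

On restart, read lines_processed for the file and skip that many data lines before resuming the buffering loop.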




Answer 6:


Instead of reading CSV rows one by one and inserting them into the db one by one, I suggest reading a chunk and inserting it into the database. Repeat this process until the entire file has been read.

You can buffer in memory, say, 1000 CSV rows at a time, then insert them into the database.

int MAX_BUFFERED = 1000;
int counter = 0;
List<List<String>> bufferedRows = new ArrayList<>();

while (scanner.hasNextLine()) {
  List<String> rowEntries = getData(scanner.nextLine()); // getData: your own row parser
  bufferedRows.add(rowEntries);
  counter++;

  if (counter == MAX_BUFFERED) {
    // INSERT INTO DATABASE:
    // append all contents to a string buffer and create your SQL INSERT statement
    bufferedRows.clear(); // remove the data so it can be GCed when GC kicks in
    counter = 0;
  }
}
// remember to insert any rows still left in bufferedRows after the loop ends


Source: https://stackoverflow.com/questions/8988195/how-do-i-read-a-large-file-from-disk-to-database-without-running-out-of-memory
