How to write super-fast file-streaming code in C#?

遥遥无期 2020-11-28 19:11

I have to split a huge file into many smaller files. Each of the destination files is defined by an offset and a length in bytes. I'm using the following code:

9 answers
  • 2020-11-28 19:46

    No one has suggested threading? Writing the smaller files looks like a textbook example of where threads are useful. Set up a bunch of threads to create the smaller files; that way you can create them all in parallel and you don't need to wait for each one to finish. My assumption is that creating the files (a disk operation) will take WAY longer than splitting up the data. And of course you should verify first that a sequential approach is not adequate.
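
    A minimal sketch of that idea (the tuple shape, method name, and section list are illustrative, not from the question; it also assumes each section is small enough to buffer in memory):

    using System.Collections.Generic;
    using System.IO;
    using System.Threading.Tasks;
    
    static void WriteSectionsInParallel(string srcFile,
        IEnumerable<(long Offset, int Length, string TargetFile)> sections)
    {
        Parallel.ForEach(sections, section =>
        {
            // Each thread opens its own read handle, so there is no shared stream position.
            byte[] data = new byte[section.Length];
            using (FileStream input = File.OpenRead(srcFile))
            {
                input.Seek(section.Offset, SeekOrigin.Begin);
                int read = 0, n;
                while (read < section.Length &&
                       (n = input.Read(data, read, section.Length - read)) > 0)
                {
                    read += n;
                }
                // Write only the bytes actually read, in case the file ends early.
                using (FileStream output = File.OpenWrite(section.TargetFile))
                {
                    output.Write(data, 0, read);
                }
            }
        });
    }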

  • 2020-11-28 19:47

    I don't believe there's anything within .NET to allow copying a section of a file without buffering it in memory. However, it strikes me that this is inefficient anyway, as it needs to open the input file and seek many times. If you're just splitting up the file, why not open the input file once, and then just write something like:

    public static void CopySection(Stream input, string targetFile, int length)
    {
        byte[] buffer = new byte[8192];
    
        using (Stream output = File.OpenWrite(targetFile))
        {
            int bytesRead = 1;
            // This will finish silently if we couldn't read "length" bytes.
            // An alternative would be to throw an exception
            while (length > 0 && bytesRead > 0)
            {
                bytesRead = input.Read(buffer, 0, Math.Min(length, buffer.Length));
                output.Write(buffer, 0, bytesRead);
                length -= bytesRead;
            }
        }
    }
    

    This has a minor inefficiency in creating a buffer on each invocation - you might want to create the buffer once and pass that into the method as well:

    public static void CopySection(Stream input, string targetFile,
                                   int length, byte[] buffer)
    {
        using (Stream output = File.OpenWrite(targetFile))
        {
            int bytesRead = 1;
            // This will finish silently if we couldn't read "length" bytes.
            // An alternative would be to throw an exception
            while (length > 0 && bytesRead > 0)
            {
                bytesRead = input.Read(buffer, 0, Math.Min(length, buffer.Length));
                output.Write(buffer, 0, bytesRead);
                length -= bytesRead;
            }
        }
    }
    

    Note that this also closes the output stream (due to the using statement) which your original code didn't.

    The important point is that this will use the operating system file buffering more efficiently, because you reuse the same input stream, instead of reopening the file at the beginning and then seeking.

    I think it'll be significantly faster, but obviously you'll need to try it to see...

    This assumes contiguous chunks, of course. If you need to skip bits of the file, you can do that from outside the method. Also, if you're writing very small files, you may want to optimise for that situation too - the easiest way to do that would probably be to introduce a BufferedStream wrapping the input stream.
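
    For example, the caller might look something like this (a rough sketch; the file names, lengths, and 8K buffer are assumptions), opening the input once, reusing one buffer, and wrapping the input in a BufferedStream:

    byte[] buffer = new byte[8192];
    
    // Illustrative list of (target file, length in bytes); sections are assumed contiguous.
    var sections = new[] { ("part1.bin", 100000), ("part2.bin", 250000) };
    
    using (Stream input = new BufferedStream(File.OpenRead("huge.dat")))
    {
        foreach (var (targetFile, length) in sections)
        {
            CopySection(input, targetFile, length, buffer);
        }
    }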

  • 2020-11-28 19:47

    The fastest way to do file I/O from C# is to use the Windows ReadFile and WriteFile functions. I have written a C# class that encapsulates this capability, as well as a benchmarking program that compares different I/O methods, including BinaryReader and BinaryWriter. See my blog post at:

    http://designingefficientsoftware.wordpress.com/2011/03/03/efficient-file-io-from-csharp/
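
    For reference, this is roughly the kind of P/Invoke plumbing involved (Windows only). It is a simplified sketch, not the class from the blog post; the class and method names here are made up:

    using System;
    using System.IO;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;
    
    static class NativeFileIo
    {
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool ReadFile(SafeFileHandle hFile, byte[] lpBuffer,
            uint nNumberOfBytesToRead, out uint lpNumberOfBytesRead, IntPtr lpOverlapped);
    
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool WriteFile(SafeFileHandle hFile, byte[] lpBuffer,
            uint nNumberOfBytesToWrite, out uint lpNumberOfBytesWritten, IntPtr lpOverlapped);
    
        // Copies one whole file to another through the Win32 API.
        public static void Copy(string srcFile, string dstFile)
        {
            byte[] buffer = new byte[64 * 1024];
            using (FileStream src = File.OpenRead(srcFile))
            using (FileStream dst = File.OpenWrite(dstFile))
            {
                uint bytesRead;
                while (ReadFile(src.SafeFileHandle, buffer, (uint)buffer.Length,
                                out bytesRead, IntPtr.Zero) && bytesRead > 0)
                {
                    WriteFile(dst.SafeFileHandle, buffer, bytesRead,
                              out uint bytesWritten, IntPtr.Zero);
                }
            }
        }
    }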

  • 2020-11-28 19:59

    You shouldn't re-open the source file each time you do a copy; it's better to open it once and pass the resulting BinaryReader to the copy function. Also, it may help to order your seeks, so you don't make big jumps inside the file.

    If the lengths aren't too big, you can also try to group several copy calls by grouping offsets that are near to each other and reading the whole block you need for them, for example:

    offset = 1234, length = 34
    offset = 1300, length = 40
    offset = 1350, length = 1000
    

    can be grouped into one read (spanning from the first offset to the end of the last section, including the small gaps in between):

    offset = 1234, length = 1116
    

    Then you only have to "seek" in your buffer and can write the three new files from there without having to read again.
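
    A rough sketch of that idea using the numbers above (the file names and the small Slice helper are illustrative):

    static byte[] Slice(byte[] source, int offset, int length)
    {
        byte[] result = new byte[length];
        Array.Copy(source, offset, result, 0, length);
        return result;
    }
    
    using (FileStream input = File.OpenRead("huge.dat"))
    {
        input.Seek(1234, SeekOrigin.Begin);
    
        // One read covering all three sections (bytes 1234 .. 2349).
        byte[] block = new byte[1116];
        int read = 0, n;
        while (read < block.Length &&
               (n = input.Read(block, read, block.Length - read)) > 0)
        {
            read += n;
        }
    
        // "Seek" inside the buffer: offsets are relative to 1234.
        File.WriteAllBytes("part1.bin", Slice(block, 0, 34));      // offset 1234, length 34
        File.WriteAllBytes("part2.bin", Slice(block, 66, 40));     // offset 1300, length 40
        File.WriteAllBytes("part3.bin", Slice(block, 116, 1000));  // offset 1350, length 1000
    }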

  • 2020-11-28 20:01

    How large is length? You may do better to re-use a fixed-size (moderately large, but not obscene) buffer, and forget BinaryReader... just use Stream.Read and Stream.Write.

    (edit) something like:

    private static void copy(string srcFile, string dstFile, int offset,
         int length, byte[] buffer)
    {
        using (Stream inStream = File.OpenRead(srcFile))
        using (Stream outStream = File.OpenWrite(dstFile))
        {
            inStream.Seek(offset, SeekOrigin.Begin);
            int bufferLength = buffer.Length, bytesRead;
            // While more than a full buffer remains, read buffer-sized chunks.
            while (length > bufferLength &&
                (bytesRead = inStream.Read(buffer, 0, bufferLength)) > 0)
            {
                outStream.Write(buffer, 0, bytesRead);
                length -= bytesRead;
            }
            // Then read whatever is left (at most "length" bytes) in smaller chunks.
            while (length > 0 &&
                (bytesRead = inStream.Read(buffer, 0, length)) > 0)
            {
                outStream.Write(buffer, 0, bytesRead);
                length -= bytesRead;
            }
        }
    }
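
    As a usage sketch (the file names and lengths are assumptions), the point is to allocate the buffer once and reuse it for every call:

    byte[] buffer = new byte[32 * 1024];   // allocated once, reused for every section
    
    copy("huge.dat", "part1.bin", 0, 100000, buffer);
    copy("huge.dat", "part2.bin", 100000, 250000, buffer);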
    
  • 2020-11-28 20:01

    Using FileStream + StreamWriter, I know it's possible to create massive files in little time (less than 1 minute 30 seconds). Using that technique, I generated three files totalling 700+ megabytes from one source file.

    The primary problem with the code you're using is that you open a file every time, and that creates file I/O overhead.

    If you knew the names of the files you would be generating ahead of time, you could extract the File.OpenWrite calls into a separate method so that each destination is opened only once; that will increase the speed. Without seeing the code that determines how you are splitting the files, I don't think you can get much faster.
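
    A rough sketch of that idea (the targetFileNames collection is hypothetical), keeping every destination stream open instead of calling File.OpenWrite per section:

    // Open every destination once, up front, keyed by file name.
    var outputs = new Dictionary<string, FileStream>();
    foreach (string name in targetFileNames)          // assumed to be known ahead of time
        outputs[name] = File.OpenWrite(name);
    
    // ... write each section to outputs[name] instead of re-opening the file ...
    
    foreach (FileStream stream in outputs.Values)
        stream.Dispose();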
