Why is buffering in C++ important?

离开以前 2021-01-30 12:00

I tried to print Hello World 200,000 times and it took forever, so I had to stop. But right after I added a char array to act as a buffer, it took far less time.
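
(For reference, a minimal sketch of the two approaches described; the exact original code isn't shown, so the buffer size and the helper names below are assumptions.)

    #include <cstdio>
    #include <cstring>
    #include <iostream>

    // Naive version: one stream insertion per iteration, i.e. many small
    // I/O requests handed to the runtime/OS.
    void print_unbuffered() {
        for (int i = 0; i < 200000; ++i)
            std::cout << "Hello World\n";
    }

    // Buffered version: collect output in a char array and flush it in
    // large chunks with far fewer I/O calls.
    void print_buffered() {
        static char buf[1 << 16];                  // 64 KiB scratch buffer (assumed size)
        std::size_t used = 0;
        const char line[] = "Hello World\n";
        const std::size_t len = sizeof(line) - 1;  // exclude the '\0'
        for (int i = 0; i < 200000; ++i) {
            if (used + len > sizeof(buf)) {        // buffer full: flush it
                std::fwrite(buf, 1, used, stdout);
                used = 0;
            }
            std::memcpy(buf + used, line, len);
            used += len;
        }
        std::fwrite(buf, 1, used, stdout);         // flush the tail
    }

    int main() { print_buffered(); }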

5 Answers
  • 2021-01-30 12:03

    What compiler/platform are you using? I see no significant difference here (RedHat, gcc 4.1.2); both programs take 5-6 seconds to finish (but "user" time is about 150 ms). If I redirect output to a file (through the shell), total time is about 300 ms (so most of the 6 seconds is spent waiting for my console to catch up to the program).

    In other words, output should be buffered by default, so I'm curious why you're seeing such a huge speedup.

    3 tangentially-related notes:

    1. Your program has an off-by-one error in that you only print 199999 times instead of the stated 200000 (either start with i = 0 or end with i <= 200000)
    2. You're mixing printf syntax with cout syntax when outputting count; the fix for that is obvious enough.
    3. Disabling sync_with_stdio produces a small speedup (about 5%) for me when outputting to the console, but the impact is negligible when redirecting to a file. This is a micro-optimization which you probably won't need in most cases (IMHO); a minimal sketch follows this list.
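
    (For completeness, here is how that micro-optimization is typically applied; it must run before any other I/O:)

        #include <iostream>

        int main() {
            // Stop the C++ streams from synchronizing with C stdio on every operation.
            std::ios_base::sync_with_stdio(false);
            // Optionally untie cin from cout so reads don't force a flush first.
            std::cin.tie(nullptr);

            for (int i = 0; i < 200000; ++i)
                std::cout << "Hello World\n";  // '\n', not std::endl: endl forces a flush per line
        }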
  • 2021-01-30 12:05

    Writing through cout involves a lot of hidden, complex logic that goes all the way down to the kernel so your text can reach the screen. When you use a buffer that way, you're essentially making one batch request instead of repeating those expensive I/O calls, as sketched below.
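
    A standard way to get that batching without hand-rolling a char array is to give the stream a bigger buffer; a sketch, assuming an output file named hello.txt (note that pubsetbuf must be called before open, and its exact effect is implementation-defined):

        #include <fstream>

        int main() {
            char big[1 << 20];                        // 1 MiB buffer; the size is an assumption
            std::ofstream out;
            out.rdbuf()->pubsetbuf(big, sizeof big);  // install the buffer first...
            out.open("hello.txt");                    // ...then open the file
            for (int i = 0; i < 200000; ++i)
                out << "Hello World\n";               // insertions accumulate, flushed in big chunks
        }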

  • 2021-01-30 12:06

    The main issue with writing to the disk is that the time taken to write is not a linear function of the number of bytes, but an affine one with a large constant term.

    In computing terms, it means that, for I/O, you have good throughput (less than memory, but still quite good), yet poor latency (a tad better than the network, normally).

    If you look at review articles on HDDs or SSDs, you'll notice that the read/write tests are separated into two categories:

    • throughput in random reads
    • throughput in contiguous reads

    The latter is normally significantly greater than the former.

    Normally, the OS and the I/O library should abstract this for you, but as you noticed, if your routine is I/O intensive, you may gain by increasing the buffer size. This is normal: the library is generally tailored for all kinds of uses, and thus offers a good middle ground for average applications. If your application is not "average", then it might not perform as fast as it could; a toy cost calculation follows.
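
    That affine model can be written down directly: time per I/O call ≈ latency + bytes / throughput. A toy calculation with made-up constants:

        #include <cstdio>

        int main() {
            const double latency_s   = 0.005;   // 5 ms fixed cost per I/O call (assumed)
            const double throughput  = 100e6;   // 100 MB/s sustained (assumed)
            const double line_bytes  = 12;      // "Hello World\n"
            const double total_bytes = 200000 * line_bytes;

            // One I/O call per line vs. one call for the whole payload.
            double per_line = 200000 * (latency_s + line_bytes / throughput);
            double one_shot = latency_s + total_bytes / throughput;

            std::printf("per-line writes: %.1f s, one large write: %.3f s\n",
                        per_line, one_shot);
            // per_line is dominated by 200,000 x 5 ms = 1000 s of latency;
            // one_shot is dominated by the ~24 ms of actual data transfer.
        }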

  • 2021-01-30 12:09

    From the standpoint of file operations, writing to memory (RAM) is always faster than writing directly to a file on disk.

    For illustration, let's define:

    • each write IO operation to a file on the disk costs 1 ms
    • each write IO operation to a file on the disk over a network costs 5 ms
    • each write IO operation to the memory costs 0.5 ms

    Let's say we have to write some data to a file 100 times.

    Case 1: Directly Writing to File On Disk

    100 times x 1 ms = 100 ms
    

    Case 2: Directly Writing to File On Disk Over Network

    100 times x 5 ms = 500 ms
    

    Case 3: Buffering in Memory before Writing to File on Disk

    (100 times x 0.5 ms) + 1 ms for the single flush to disk = 51 ms
    

    Case 4: Buffering in Memory before Writing to File on Disk Over Network

    (100 times x 0.5 ms) + 5 ms for the single flush over the network = 55 ms
    

    Conclusion

    Buffering in memory is almost always faster than direct operation. However, if your system is low on memory and has to swap to the page file, it will be slow again. Thus you have to balance your I/O operations between memory and disk/network.
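
    The four cases above are simple enough to check mechanically; a throwaway sketch using the same illustrative costs:

        #include <cstdio>

        int main() {
            const int    writes  = 100;
            const double disk    = 1.0;    // ms per write to disk (illustrative)
            const double network = 5.0;    // ms per write over the network (illustrative)
            const double memory  = 0.5;    // ms per write to memory (illustrative)

            std::printf("case 1, direct to disk:      %.0f ms\n", writes * disk);
            std::printf("case 2, direct over network: %.0f ms\n", writes * network);
            std::printf("case 3, buffer then disk:    %.1f ms\n", writes * memory + disk);
            std::printf("case 4, buffer then network: %.1f ms\n", writes * memory + network);
        }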

  • 2021-01-30 12:15

    If you have a buffer, you make fewer actual I/O calls, and those calls are the slow part. The buffer fills up first; then a single I/O call is made to flush it. This is equally helpful in Java or any other environment where I/O is slow.
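
    A minimal sketch of that fill-then-flush pattern (the class name and buffer size are assumptions):

        #include <cstdio>
        #include <cstring>

        class BufferedWriter {
            char        buf_[4096];   // 4 KiB buffer (assumed size)
            std::size_t used_ = 0;
        public:
            void write(const char* data, std::size_t n) {
                if (used_ + n > sizeof(buf_)) flush();  // no room left: flush first
                if (n > sizeof(buf_)) {                 // oversized payload: write straight through
                    std::fwrite(data, 1, n, stdout);
                    return;
                }
                std::memcpy(buf_ + used_, data, n);     // common case: just a cheap memory copy
                used_ += n;
            }
            void flush() {                              // the one actual I/O call
                std::fwrite(buf_, 1, used_, stdout);
                used_ = 0;
            }
            ~BufferedWriter() { flush(); }
        };

        int main() {
            BufferedWriter w;
            for (int i = 0; i < 200000; ++i)
                w.write("Hello World\n", 12);
        }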
