I tried to print Hello World 200,000 times and it took forever, so I had to stop. But right after I added a char array to act as a buffer, it took much less time. What makes the buffered version so much faster?
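The question's code isn't shown; a plausible reconstruction of the slow version (not the asker's actual code) is a loop that flushes the stream on every iteration:

    #include <iostream>

    int main() {
        for (int i = 0; i < 200000; ++i)
            std::cout << "Hello World" << std::endl;  // std::endl flushes the stream every line
    }

Replacing std::endl with '\n', or collecting everything into one char array as the asker did, avoids 200,000 separate flushes.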
What compiler/platform are you using? I see no significant difference here (RedHat, gcc 4.1.2); both programs take 5-6 seconds to finish (but "user" time is about 150 ms). If I redirect output to a file (through the shell), total time is about 300 ms (so most of the 6 seconds is spent waiting for my console to catch up to the program).
In other words, output should be buffered by default, so I'm curious why you're seeing such a huge speedup.
3 tangentially-related notes:

- Your loop should either start with i = 0 or end with i <= 200000 (as written, it doesn't print exactly 200,000 times).
- You're mixing printf syntax with cout syntax when outputting count... the fix for that is obvious enough.
- sync_with_stdio produces a small speedup (about 5%) for me when outputting to the console, but the impact is negligible when redirecting to a file. This is a micro-optimization which you probably wouldn't need in most cases (IMHO).

cout contains a lot of hidden and complex logic going all the way down to the kernel so you can write your text to the screen. When you use a buffer that way, you're essentially doing one batch request instead of repeating those expensive I/O calls.
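As a concrete illustration of that batching idea, here's a minimal sketch (the buffer size and layout are my own choices, not the asker's code): build all the output in a char array, then hand it to the OS in a single call.

    #include <cstdio>
    #include <cstring>

    int main() {
        // "Hello World\n" is 12 characters; reserve room for 200,000 copies.
        static char buffer[200000 * 12];
        std::size_t pos = 0;
        for (int i = 0; i < 200000; ++i) {
            std::memcpy(buffer + pos, "Hello World\n", 12);
            pos += 12;
        }
        std::fwrite(buffer, 1, pos, stdout);  // one big I/O call instead of 200,000 small ones
        return 0;
    }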
The main issue with writing to the disk is that the time taken to write is not a linear function of the number of bytes, but an affine one with a huge constant.
In computing terms, it means that, for IO, you get good throughput (less than memory, but still quite good), yet you have poor latency (normally a tad better than network).
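In symbols (notation mine, not from the answer): the time to write n bytes in a single call is roughly

    T(n) = L + n / B

where L is the fixed per-call latency (the huge constant) and B is the sustained throughput. Splitting the same n bytes across k calls costs about k * L + n / B, which is why fewer, larger writes win.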
If you look at evaluation articles of HDD or SSD, you'll notice that the read/write tests are separated in two categories:

- random access reads/writes (many small blocks at scattered locations)
- sequential (contiguous) reads/writes

The throughput of the latter is normally significantly greater than that of the former.
Normally, the OS and the IO library should abstract this for you, but as you noticed, if your routine is IO intensive, you might gain by increasing the buffer size. This is normal: the library is generally tailored for all kinds of uses and thus offers a good middle ground for average applications. If your application is not "average", then it might not perform as fast as it could.
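For instance, with the C stdio layer you can supply a bigger buffer yourself via setvbuf (the 64 KiB below is an arbitrary size chosen for illustration):

    #include <cstdio>

    int main() {
        // Replace stdout's default buffer with a larger, fully buffered one.
        // This must be done before the first I/O operation on the stream.
        static char big_buffer[1 << 16];
        std::setvbuf(stdout, big_buffer, _IOFBF, sizeof big_buffer);

        for (int i = 0; i < 200000; ++i)
            std::fputs("Hello World\n", stdout);
        return 0;  // the buffer is flushed when stdout is closed at program exit
    }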
From the standpoint of file operations, writing to memory (RAM) is always faster than writing directly to the file on disk.
For illustration, let's define (with made-up but plausible costs):

- a write to the file on disk costs 1 ms
- a write to the file across the network costs 5 ms
- a write to memory costs 0.5 ms

Let's say we have to write some data to a file 100 times.

- Writing directly to the file on disk: 100 times x 1 ms = 100 ms
- Writing directly to the file across the network: 100 times x 5 ms = 500 ms
- Buffering in memory first, then one write to the file on disk: (100 times x 0.5 ms) + 1 ms = 51 ms
- Buffering in memory first, then one write to the file across the network: (100 times x 0.5 ms) + 5 ms = 55 ms
Buffering in memory is always faster than direct operation. However, if your system is low on memory and has to swap to the page file, it'll be slow again. Thus you have to balance your IO operations between memory and disk/network.
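To see the difference on your own machine, here's a rough benchmark sketch (the file names, sizes, and the per-line flush are mine, purely for illustration):

    #include <chrono>
    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        using clock = std::chrono::steady_clock;

        auto t0 = clock::now();
        {
            std::ofstream f("direct.txt");
            for (int i = 0; i < 200000; ++i)
                f << "Hello World\n" << std::flush;  // force one I/O request per line
        }
        auto t1 = clock::now();
        {
            std::string buffer;
            buffer.reserve(200000 * 12);             // fill memory first...
            for (int i = 0; i < 200000; ++i)
                buffer += "Hello World\n";
            std::ofstream f("buffered.txt");
            f << buffer;                             // ...then write it out in one go
        }
        auto t2 = clock::now();

        using ms = std::chrono::milliseconds;
        std::cout << "direct:   " << std::chrono::duration_cast<ms>(t1 - t0).count() << " ms\n"
                  << "buffered: " << std::chrono::duration_cast<ms>(t2 - t1).count() << " ms\n";
    }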
If you have a buffer, you get fewer actual I/O calls, which are the slow part: first the buffer gets filled, then one I/O call is made to flush it. This is equally helpful in Java or any other system where I/O is slow.