overlapped-io

Explanation for tiny reads (overlapped, buffered) outperforming large contiguous reads?

☆樱花仙子☆ submitted on 2019-11-27 18:35:58
(Apologies for the somewhat lengthy intro.) During development of an application which prefaults an entire large file (>400 MB) into the buffer cache to speed up the actual run later, I tested whether reading 4 MB at a time still had any noticeable benefit over reading only 1 MB chunks at a time. Surprisingly, the smaller requests actually turned out to be faster. This seemed counter-intuitive, so I ran a more extensive test. The buffer cache was purged before running the tests (just for laughs, I did one run with the file already in the buffers, too; the buffer cache delivers upwards of 2 GB/s).
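To make the setup concrete, here is a minimal C sketch of the prefaulting loop the question describes: it reads a file sequentially in fixed-size chunks purely to populate the page cache. The file name and chunk size are placeholders, not values from the question; timing this loop with CHUNK set to 1 MB versus 4 MB (after dropping the caches) would reproduce the comparison being asked about.

    /* Sketch: sequentially read a file in fixed-size chunks to prefault
     * it into the buffer cache. "large.bin" and CHUNK are placeholders. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define CHUNK (1 << 20)   /* 1 MB per read(); try (4 << 20) for 4 MB */

    int main(void)
    {
        int fd = open("large.bin", O_RDONLY);   /* hypothetical input file */
        if (fd < 0) { perror("open"); return 1; }

        char *buf = malloc(CHUNK);
        if (!buf) { close(fd); return 1; }

        ssize_t n;
        while ((n = read(fd, buf, CHUNK)) > 0)
            ;   /* discard the data; the point is to populate the page cache */
        if (n < 0) perror("read");

        free(buf);
        close(fd);
        return 0;
    }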

Is there really no asynchronous block I/O on Linux?

主宰稳场 submitted on 2019-11-27 09:46:44
Question: Consider an application that is CPU-bound, but also has high-performance I/O requirements. I'm comparing Linux file I/O to Windows, and I can't see how epoll will help a Linux program at all. The kernel will tell me that the file descriptor is "ready for reading," but I still have to call the blocking read() to get my data, and if I want to read megabytes, it's pretty clear that it will block. On Windows, I can create a file handle with OVERLAPPED set, use non-blocking I/O, and get notified when the I/O completes.
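For context on what the question is probing, the sketch below shows one mechanism that does exist on Linux for asynchronous reads on regular files: kernel AIO via libaio's io_submit/io_getevents, which submits a read and collects its completion later, roughly analogous to a Windows OVERLAPPED ReadFile. The file name, offset, and buffer size here are assumptions for illustration; note that kernel AIO only behaves truly asynchronously with O_DIRECT, which imposes alignment requirements on the buffer, offset, and length. Link with -laio.

    /* Sketch: asynchronous read of one block via Linux kernel AIO (libaio).
     * "large.bin" and BUF_SIZE are placeholders. */
    #define _GNU_SOURCE   /* for O_DIRECT */
    #include <fcntl.h>
    #include <libaio.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BUF_SIZE 4096   /* one aligned block */

    int main(void)
    {
        /* O_DIRECT is what makes kernel AIO genuinely asynchronous */
        int fd = open("large.bin", O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        void *buf;
        if (posix_memalign(&buf, 4096, BUF_SIZE)) { close(fd); return 1; }

        io_context_t ctx = 0;
        if (io_setup(1, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

        struct iocb cb;
        struct iocb *cbs[1] = { &cb };
        io_prep_pread(&cb, fd, buf, BUF_SIZE, 0);   /* read BUF_SIZE bytes at offset 0 */

        if (io_submit(ctx, 1, cbs) != 1) { fprintf(stderr, "io_submit failed\n"); return 1; }

        /* ... the CPU-bound work could proceed here while the read is in flight ... */

        struct io_event ev;
        if (io_getevents(ctx, 1, 1, &ev, NULL) == 1)
            printf("read completed: %ld bytes\n", (long)ev.res);

        io_destroy(ctx);
        free(buf);
        close(fd);
        return 0;
    }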