Question
I used the code below to measure the performance of large, sequential reads from a memory-mapped file, compared to just calling ReadFile:
HANDLE hFile = CreateFile(_T("D:\\LARGE_ENOUGH_FILE"),
    FILE_READ_DATA, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING,
    FILE_FLAG_NO_BUFFERING, NULL);
__try
{
    const size_t TO_READ = 32 * 1024 * 1024;
    char sum = 0;
#if TEST_READ_FILE
    DWORD start = GetTickCount();
    char* p = (char*)malloc(TO_READ);
    DWORD nw;
    ReadFile(hFile, p, TO_READ, &nw, NULL);
#else
    HANDLE hMapping = CreateFileMapping(hFile, NULL, PAGE_READONLY,
        0, 0, NULL);
    const char* const p = (const char*)MapViewOfFile(hMapping,
        FILE_MAP_READ, 0, 0, 0);
    DWORD start = GetTickCount();
#endif
    for (size_t i = 0; i < TO_READ; i++)
    {
        sum += p[i]; // Do something kind of trivial...
    }
    DWORD end = GetTickCount();
    _tprintf(_T("Elapsed: %u"), end - start);
}
__finally { CloseHandle(hFile); }
(I just changed the value of TEST_READ_FILE to switch between the two tests.)
To my surprise, ReadFile was slower by ~20%! Why?
Answer 1:
FILE_FLAG_NO_BUFFERING cripples ReadFile. The memory-mapped file is free to use whatever read-ahead algorithm it wants, while you've forbidden ReadFile from doing the same. You've turned off caching only in the ReadFile version; memory-mapped files cannot work without the file cache.
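For a fairer comparison, one option (a minimal sketch, not part of the original answer) is to open the file for the ReadFile test without FILE_FLAG_NO_BUFFERING, so both paths go through the system file cache; FILE_FLAG_SEQUENTIAL_SCAN additionally hints the cache manager to read ahead aggressively. The path and buffer size below are the placeholders from the question, and error handling is kept minimal.

// Sketch: buffered, sequential-scan ReadFile path so the comparison with the
// memory-mapped version is apples-to-apples. Assumes <windows.h>, <tchar.h>,
// and <stdlib.h> are included.
HANDLE hFile = CreateFile(_T("D:\\LARGE_ENOUGH_FILE"),
    FILE_READ_DATA, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING,
    FILE_FLAG_SEQUENTIAL_SCAN,   // let the cache manager read ahead
    NULL);
if (hFile != INVALID_HANDLE_VALUE)
{
    const size_t TO_READ = 32 * 1024 * 1024;
    char* p = (char*)malloc(TO_READ);
    char sum = 0;

    DWORD start = GetTickCount();
    DWORD nw = 0;
    ReadFile(hFile, p, (DWORD)TO_READ, &nw, NULL);   // now served via the file cache
    for (size_t i = 0; i < TO_READ; i++)
        sum += p[i];
    DWORD end = GetTickCount();

    _tprintf(_T("Elapsed: %u\n"), end - start);
    free(p);
    CloseHandle(hFile);
}

With buffering enabled, both the ReadFile and MapViewOfFile paths are fed from the same cache manager, so any remaining difference reflects the API overhead rather than the presence or absence of read-ahead.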
来源:https://stackoverflow.com/questions/5257019/memory-mapped-file-is-faster-on-huge-sequential-read-why