I have plenty of RAM; however, after starting and finishing a large number of processes, it seems that most of the applications' virtual memory has been paged out to disk, and I would like to force it back into physical memory.
No, Windows provides no such feature natively. Programs such as Cacheman and RAM IDLE accomplish this by simply allocating a large chunk of RAM, forcing everything else to page out to disk, which effectively achieves what you want.
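For illustration, here is a rough sketch of that allocate-and-touch approach; the function name and the choice to grab roughly 75% of physical RAM are assumptions made for this sketch, not how those tools are actually implemented:

#include <windows.h>

// Rough sketch of the allocate-and-touch trick such tools use (illustrative only).
void SqueezeOtherProcessesOut(void)
{
    MEMORYSTATUSEX ms = { sizeof(ms) };
    SYSTEM_INFO si;
    SIZE_T grab, i;
    BYTE *p;

    GlobalMemoryStatusEx(&ms);
    GetSystemInfo(&si);

    grab = (SIZE_T)(ms.ullTotalPhys / 4 * 3);   // claim roughly 75% of physical RAM (assumed figure)
    p = (BYTE *)VirtualAlloc(NULL, grab, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (p == NULL)
        return;

    for (i = 0; i < grab; i += si.dwPageSize)
        p[i] = 1;                               // touch every page so it must be backed by physical RAM

    VirtualFree(p, 0, MEM_RELEASE);             // free it again; other processes' memory is now largely paged out
}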
Well, it isn't hard to implement yourself: use VirtualQueryEx() to discover the virtual addresses used by a process and ReadProcessMemory() to force the pages to be reloaded.
It isn't likely to make any difference at all; it will just be your program that takes forever to do its job. The common diagnosis for slow reloading of pages is a fragmented paging file. This was common on Windows XP, for example, when the disk hadn't been defragmented in a long time and had frequently been allowed to fill close to capacity. The SysInternals PageDefrag utility can help fix the problem.
Update 3: I've uploaded my complete program to GitHub.
OK, based on the replies so far, here's a naive suggestion for a tool that tries to get all applications back into physical memory:
The tool would copy each process's committed memory into a local buffer, chunk by chunk, forcing the OS to page those regions back in. Suppose you have 2 GB of RAM and only 1 GB is actually required by processes: to get everything back into physical memory you'd only have to copy 256 chunks of 4 MiB each, which is not the end of the world. At the end of the day, there's a good chance that all processes are entirely in physical memory again.
Possible convenience and optimisation options:
I can iterate over all processes using EnumProcesses(); I'd be grateful for any suggestions on how to copy an entire process's memory chunk-wise.
Update: Here is my sample function. It takes the process ID as its first argument and copies one byte from each good (committed) page of the process. (The second and third arguments are the highest application address and the system page size, both obtainable via GetSystemInfo().)
#include <windows.h>
#include <stdio.h>

void UnpageProcessByID(DWORD processID, LPVOID MaximumApplicationAddress, DWORD PageSize)
{
    MEMORY_BASIC_INFORMATION meminfo;
    LPVOID lpMem = NULL;

    // Get a handle to the process.
    HANDLE hProcess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, processID);

    // Do the work
    if (NULL == hProcess)
    {
        fprintf(stderr, "Could not get process handle, skipping requested process ID %lu.\n", processID);
    }
    else
    {
        SIZE_T nbytes;
        unsigned char buf;

        while (lpMem < MaximumApplicationAddress)
        {
            SIZE_T stepsize = PageSize;

            if (!VirtualQueryEx(hProcess, lpMem, &meminfo, sizeof(meminfo)))
            {
                fprintf(stderr, "Error during VirtualQueryEx(), skipping process (error code %lu, PID %lu).\n", GetLastError(), processID);
                break;
            }
            if (meminfo.RegionSize < stepsize) stepsize = meminfo.RegionSize;

            switch (meminfo.State)
            {
            case MEM_COMMIT:
                // This next line should be disabled in the final code
                fprintf(stderr, "Page at %p: Good, unpaging.\n", lpMem);
                // Reading a single byte is enough to fault the whole page back in.
                if (0 == ReadProcessMemory(hProcess, lpMem, (LPVOID)&buf, 1, &nbytes))
                    fprintf(stderr, "Failed to read one byte from %p, error %lu (%u bytes read).\n", lpMem, GetLastError(), (unsigned)nbytes);
                else
                    // This next line should be disabled in the final code
                    fprintf(stderr, "Read %u byte(s) successfully from %p (byte was: 0x%X).\n", (unsigned)nbytes, lpMem, buf);
                break;
            case MEM_FREE:
                fprintf(stderr, "Page at %p: Free (unused), skipping.\n", lpMem);
                stepsize = meminfo.RegionSize;  // skip the whole free region at once
                break;
            case MEM_RESERVE:
                fprintf(stderr, "Page at %p: Reserved, skipping.\n", lpMem);
                stepsize = meminfo.RegionSize;  // skip the whole reserved region at once
                break;
            default:
                fprintf(stderr, "Page at %p: Unknown state, panic!\n", lpMem);
            }

            //lpMem = (LPVOID)((DWORD)meminfo.BaseAddress + (DWORD)meminfo.RegionSize);
            lpMem = (LPVOID)((ULONG_PTR)lpMem + stepsize);  // cast instead of void* arithmetic, which is not standard C
        }

        CloseHandle(hProcess);
    }
}
Question: Does the region whose size I use as the increment consist of at most one page, or am I skipping pages? Should I determine the page size as well and increment only by the minimum of the region size and the page size?

Update 2: The page size is only 4 KiB! I have changed the above code to increment in 4 KiB steps only. In the final code we would get rid of the fprintf()s inside the loop.
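For completeness, here is a minimal sketch of how the function above might be driven for every running process via EnumProcesses(), as mentioned earlier. The fixed 4096-entry PID buffer, the name UnpageAllProcesses and the decision to skip PID 0 are simplifications of my own (link against psapi.lib):

#include <windows.h>
#include <psapi.h>
#include <stdio.h>

void UnpageAllProcesses(void)
{
    SYSTEM_INFO si;
    DWORD pids[4096], cbReturned, count, i;

    // Page size and highest application address, as needed by UnpageProcessByID().
    GetSystemInfo(&si);

    if (!EnumProcesses(pids, sizeof(pids), &cbReturned))
    {
        fprintf(stderr, "EnumProcesses() failed, error %lu.\n", GetLastError());
        return;
    }

    count = cbReturned / sizeof(DWORD);
    for (i = 0; i < count; ++i)
        if (pids[i] != 0)   // PID 0 is the System Idle Process; skip it
            UnpageProcessByID(pids[i], si.lpMaximumApplicationAddress, si.dwPageSize);
}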