Question
There are many resources describing the NUMA architecture from a hardware perspective and the performance implications of writing NUMA-aware software, but I have not yet found information on how the mapping between virtual pages and physical frames is decided with respect to NUMA.
More specifically, an application running on modern Linux still sees a single contiguous virtual address space.
How can the application tell which parts of the address space are mapped onto local memory and which are mapped onto the memory of another NUMA node?
If the answer is that the application cannot tell, how does the OS decide when to map a virtual page to the physical memory of another NUMA node rather than the local physical memory?
Answer 1:
A quick answer is to have the program look at /proc/self/numa_maps. An example output looks like:
$ cat /proc/$$/numa_maps # dumps current zsh numa_maps
55a4d27ff000 default file=/usr/bin/zsh mapped=177 N0=177 kernelpagesize_kB=4
55a4d2ab9000 default file=/usr/bin/zsh anon=2 dirty=2 N0=2 kernelpagesize_kB=4
55a4d2abb000 default file=/usr/bin/zsh anon=6 dirty=6 N0=4 N1=2 kernelpagesize_kB=4
55a4d2ac1000 default anon=9 dirty=9 N0=4 N1=5 kernelpagesize_kB=4
The first field on each line is the starting address of the mapping, which lets you correlate it with the output of /proc/<pid>/maps. The N<node>=<pages> entries then tell you how many pages of that mapping are allocated on each node.
So if you need to know which NUMA node a memory area lives on, do a binary search over the start addresses in the output of this file; a programmatic alternative is sketched below.
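If you would rather query a single address directly instead of parsing /proc/self/numa_maps, the kernel exposes the same information through get_mempolicy(2) called with MPOL_F_NODE | MPOL_F_ADDR (the wrapper comes from libnuma's numaif.h). This is a minimal sketch, not the method described in the answer above; the buffer size and variable names are arbitrary, and you need to link with -lnuma:

/* query_node.c -- build with: gcc query_node.c -o query_node -lnuma */
#include <numaif.h>     /* get_mempolicy, MPOL_F_NODE, MPOL_F_ADDR */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t len = 4 * 1024 * 1024;
    char *buf = malloc(len);
    if (!buf)
        return 1;

    /* Touch the memory so physical frames are actually allocated;
     * until then there is nothing placed on any node. */
    memset(buf, 0, len);

    int node = -1;
    /* With MPOL_F_NODE | MPOL_F_ADDR, the node currently holding the
     * page that contains 'buf' is returned through the first argument. */
    if (get_mempolicy(&node, NULL, 0, buf, MPOL_F_NODE | MPOL_F_ADDR) != 0) {
        perror("get_mempolicy");
        free(buf);
        return 1;
    }

    printf("address %p is backed by NUMA node %d\n", (void *)buf, node);
    free(buf);
    return 0;
}

Note that this reports where the page happens to be right now; under automatic NUMA balancing the kernel may migrate it later, so numa_maps and get_mempolicy both give a snapshot, not a guarantee.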
Some more information here:
https://www.kernel.org/doc/Documentation/vm/numa_memory_policy.txt
https://lwn.net/Articles/486858/ (Toward better NUMA scheduling)
Source: https://stackoverflow.com/questions/36709872/how-is-numa-represented-in-virtual-memory