I have a dual-socket Xeon E5522 2.26 GHz machine (with hyperthreading disabled) running Ubuntu Server on Linux kernel 3.0 supporting NUMA. The architecture layout is 4 physical c
The current OpenMP standard defines a boolean environment variable OMP_PROC_BIND that controls binding of OpenMP threads. If set to true, e.g.

shell$ OMP_PROC_BIND=true OMP_NUM_THREADS=12 ./app.x

then the OpenMP execution environment should not move threads between processors. Unfortunately nothing more is said about how those threads should be bound, and that's what a special working group in the OpenMP language committee is addressing right now. OpenMP 4.0 will come with new environment variables and clauses that allow one to specify how to distribute the threads. Of course, many OpenMP implementations offer their own non-standard methods to control binding.
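As a quick way to see what your runtime actually does (a minimal sketch, not part of the standard; it assumes Linux, where sched_getcpu() is available), you can let every thread report the CPU it runs on:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    {
        /* each thread reports its OpenMP ID and the CPU it is currently running on */
        printf("thread %d runs on CPU %d\n", omp_get_thread_num(), sched_getcpu());
    }
    return 0;
}

Compile with gcc -fopenmp and run it with OMP_PROC_BIND=true OMP_NUM_THREADS=12 ./a.out to see where the threads land.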
Still, most OpenMP runtimes are not NUMA-aware. They will happily dispatch threads to any available CPU and you would have to make sure that each thread only accesses data that belongs to it. There are some general hints in this direction:

- Avoid dynamic scheduling for parallel for (C/C++) / DO (Fortran) loops.
- For for loops with the same team size and the same number of iteration chunks, with static scheduling chunk 0 of both loops will be executed by thread 0, chunk 1 by thread 1, and so on, so each thread keeps working on the data it first touched (see the sketch below).

Some colleagues of mine have thoroughly evaluated the NUMA behaviour of different OpenMP runtimes and have specifically looked into the NUMA awareness of Intel's implementation, but the articles are not published yet so I cannot provide you with a link.
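As a minimal sketch of that last hint (a hypothetical example, not taken from the evaluations mentioned above): initialise the data and process it with the same static schedule, so each thread later reuses the chunk it first touched, which under Linux's default first-touch policy lands on that thread's NUMA node.

#include <stdio.h>
#include <stdlib.h>

#define N 100000000L

int main(void)
{
    double *a = malloc(N * sizeof(double));
    if (a == NULL) return 1;

    /* first touch: with static scheduling, thread t initialises chunk t,
       so its pages are allocated on the NUMA node where thread t runs */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        a[i] = 0.0;

    /* same team size, same static schedule: thread t gets chunk t again
       and therefore accesses mostly node-local memory */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        a[i] = 2.0 * i;

    printf("a[N-1] = %f\n", a[N - 1]);
    free(a);
    return 0;
}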
There is one research project, called ForestGOMP, which aims at providing a NUMA-aware drop-in replacement for libgomp. Maybe you should give it a look.
You can also check that your memory placement and accesses are done the right way with NUMAPROF, a new open-source tool for Linux that profiles NUMA applications: https://memtt.github.io/numaprof/.