Monitoring performance of Linux automatic memory migration between NUMA nodes

Non-Uniform Memory Access (NUMA) is used on larger computer systems to give the machine enough aggregate memory bandwidth for its many processors. The downside of NUMA is that the placement of tasks and memory becomes a concern: a processor's access to memory on other NUMA nodes is slower than access to memory on its local NUMA node. Tasks may be migrated between nodes to improve processor utilization, and memory may be allocated on nodes other than the node holding the CPU a task is currently running on. The kernel's automatic NUMA balancing can migrate pages of memory closer to the processors using that memory. However, NUMA balancing has overhead, and there are cases where the automatic memory migration can hurt performance (for example, latency-sensitive applications).

How the Linux automatic NUMA page migration mechanism works

On modern machines there are two types of addresses: virtual and physical. The virtual addresses used by programs are mapped to physical memory located on the various NUMA nodes in the machine. By changing the mapping, a virtual address can be made to refer to physical memory on a different NUMA node. To help determine whether a page of memory should be moved, the kernel removes the virtual-to-physical mappings for some regions of memory (pages). When a program attempts to access a virtual address that is not mapped to physical memory, the access triggers a page fault exception. The kernel can then determine which processor is attempting to access the page and whether the page is on the same NUMA node as that processor or on a remote NUMA node. If the page the processor is attempting to access is on a remote NUMA node, the kernel can migrate the page from the remote NUMA node to the local NUMA node, so that later accesses from that CPU to that page have lower latency.
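The counters that drive and record this activity are exported by the kernel in /proc/vmstat, and they are the same data that PCP exposes through the mem.vmstat metrics used later in this article. A quick way to see whether automatic NUMA balancing is enabled and whether hinting faults are occurring (a minimal check, assuming a kernel built with NUMA balancing support) is:

$ cat /proc/sys/kernel/numa_balancing
$ grep -E 'numa_hint_faults|numa_pages_migrated|pgmigrate_(success|fail)' /proc/vmstat

A value of 1 from the first command means automatic NUMA balancing is enabled. The grep lists the cumulative kernel counters (numa_hint_faults, numa_hint_faults_local, numa_pages_migrated, pgmigrate_success, and pgmigrate_fail) that PCP samples and rate-converts in the configurations below.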

Performance Co-Pilot (PCP) monitoring of page migrations

Performance Co-Pilot (PCP) provides access to the NUMA page migration metrics exported by the Linux kernel. The configurable pmrep tool included with PCP can be set up to monitor the hinting page faults used to trigger page migrations with the following configuration:


[numa-hint-faults]
header = yes
unitinfo = no
globals = no
timestamp = yes
width = 15
precision = 2
delimiter = " "
mem.vmstat.numa_hint_faults = faults/s,,,,
mem.vmstat.numa_hint_faults_local = faults_local/s,,,,
local = mem.vmstat.numa_hint_faults_local_percent
local.label = %%local
local.formula = 100 *
  (rate(mem.vmstat.numa_hint_faults)
   ?
   rate(mem.vmstat.numa_hint_faults_local)/rate(mem.vmstat.numa_hint_faults)
   :
   mkconst(1, type="double", semantics="instant") )
local.width = 7
faults_remote = mem.vmstat.numa_hint_faults_remote
faults_remote.formula = mem.vmstat.numa_hint_faults - mem.vmstat.numa_hint_faults_local
faults_remote.label = faults_remote/s
remote = mem.vmstat.numa_hint_faults_remote_percent
remote.formula = 100 *
  (rate(mem.vmstat.numa_hint_faults)
   ?
   (1 - rate(mem.vmstat.numa_hint_faults_local)/rate(mem.vmstat.numa_hint_faults))
   :
   mkconst(0, type="double", semantics="instant") )
remote.label = %%remote
remote.width = 7
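
Before using the configuration, it can be worth confirming that the underlying kernel metrics are actually available from the local pmcd. The pminfo tool shipped with PCP can fetch their current values (a quick sanity check, assuming a default PCP installation with the pmcd service running):

$ pminfo -f mem.vmstat.numa_hint_faults mem.vmstat.numa_hint_faults_local mem.vmstat.numa_pages_migrated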

Below is the output of pmrep monitoring the hinting page faults with the above pmrep configuration stored as numa-hint-faults.conf. The first column is the local time. By default a new measurement is taken every second and a new row of output is produced. The second column is the rate of all hinting page faults. The third and fourth columns give the rate of local hinting page faults and the percentage of the total hinting faults that are local. Local hinting faults are page faults where the processor that triggered the hinting page fault and the memory it referred to are on the same NUMA node. The last two columns are the rate of remote hinting faults and the percentage of the hinting faults that are remote. Remote hinting faults are page faults where the processor that triggered the hinting page fault and the memory it referred to are on different NUMA nodes. In the example below the percentage of remote hinting page faults is very low: most samples have no remote hinting page faults, and the highest percentage of remote hinting faults is 2.93%.


$ pmrep -c ~/numa-hint-faults.conf :numa-hint-faults
faults/s faults_local/s %local faults_remote/s %remote
12:36:02 N/A N/A N/A N/A N/A
12:36:03 48.88 48.88 100.00 0.00 0.00
12:36:04 18.00 18.00 100.00 0.00 0.00
12:36:05 20.01 20.01 100.00 0.00 0.00
12:36:06 2157.61 2155.61 99.91 2.00 0.09
12:36:07 340.97 330.97 97.07 10.00 2.93
12:36:08 184.03 184.03 100.00 0.00 0.00
12:36:09 401.18 401.18 100.00 0.00 0.00
12:36:10 42.98 42.98 100.00 0.00 0.00
12:36:11 360.99 360.99 100.00 0.00 0.00

When tasks are assigned to different nodes via the taskset command while the tasks' associated memory has not yet been migrated, the percentage of remote hinting page faults increases dramatically, as seen in the example below.


$ pmrep -c ~/numa-hint-faults.conf :numa-hint-faults
faults/s faults_local/s %local faults_remote/s %remote
12:38:09 N/A N/A N/A N/A N/A
12:38:10 47.91 4.99 10.42 42.92 89.58
12:38:11 46.97 0.00 0.00 46.97 100.00
12:38:12 342.25 118.09 34.50 224.17 65.50
12:38:13 215.02 1.00 0.47 214.02 99.53
12:38:14 76.00 1.00 1.32 75.00 98.68
12:38:15 41.00 0.00 0.00 41.00 100.00
12:38:16 264.00 4.00 1.52 260.00 98.48
12:38:17 185.98 113.99 61.29 71.99 38.71
12:38:18 17.00 7.00 41.18 10.00 58.82
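
For reference, the pattern above can be produced by forcing a running task onto CPUs that belong to a different node than its memory. The exact CPU numbers depend on the machine's topology, so the commands below are only a sketch, assuming node 1's CPUs are 8-15 and the process of interest has PID 12345:

$ numactl --hardware        # list the CPUs and memory belonging to each node
$ taskset -cp 8-15 12345    # pin PID 12345 to node 1's CPUs (example values)

Once the task runs on the new node, the hinting faults it triggers resolve to memory still sitting on the old node, which shows up as the high %remote values above until the pages are migrated.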

PCP can also monitor the actual migrations: the number of pages successfully migrated, the number of pages where migration was attempted but failed, and an estimate of the average amount of bandwidth used per node. Below is a pmrep configuration (stored as numa-pgmigrates.conf in the examples that follow) to display this information.


[numa-pgmigrate-per-node]
header = yes
unitinfo = no
globals = no
timestamp = yes
width = 15
precision = 3
delimiter = " "
node_bw = mem.vmstat.numa_bandwidth
node_bw.label = MB/s/node
node_bw.formula = rate(mem.vmstat.numa_pages_migrated) *
  hinv.pagesize/hinv.nnode/mkconst(1000000, type="double", semantics="instant")
node_pg = mem.vmstat.numa_pages
node_pg.label = auto pg/s/node
node_pg.formula = rate(mem.vmstat.numa_pages_migrated)/hinv.nnode
node_succ_pg = mem.vmstat.numa_pgmigrate_success
node_succ_pg.label = success/s/node
node_succ_pg.formula = rate(mem.vmstat.pgmigrate_success)/hinv.nnode
node_fail_pg = mem.vmstat.numa_pgmigrate_fail
node_fail_pg.label = fail/s/node
node_fail_pg.formula = rate(mem.vmstat.pgmigrate_fail)/hinv.nnode

Below is example output showing that automatic page migration is moving pages in an attempt to co-locate pages with the tasks using them. The first column is the local time. The second column is the estimate of the average bandwidth used per node for page migration. The third column is the average rate at which pages are automatically migrated on each node; for example, in the 14:02:39 sample, 516.261 pages/s/node with a 4 KB page size works out to roughly 516.261 * 4096 bytes, or about 2.115 MB/s/node, matching the second column. The third and fourth columns appear very similar, but success/s/node also includes pages moved by explicit page migrations from the migratepages command. The last column shows the rate of automatic and explicit page migrations that failed. Ideally, every page migration should succeed and the last column should be zero.


$ pmrep -c ~/numa-pgmigrates.conf :numa-pgmigrate-per-node
MB/s/node auto pg/s/node success/s/node fail/s/node
14:02:34 N/A N/A N/A N/A
14:02:35 0.004 0.997 0.997 0.000
14:02:36 0.004 1.000 1.000 0.000
14:02:37 0.014 3.503 3.503 0.000
14:02:38 0.000 0.000 0.000 0.000
14:02:39 2.115 516.261 516.261 0.000
14:02:40 0.008 2.000 2.000 0.000
14:02:41 0.131 32.000 32.000 0.000
14:02:42 0.008 2.002 2.002 0.000
14:02:43 0.006 1.500 1.500 0.000
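
The success/s/node column counts explicit migrations as well. Explicit migration is done with the migratepages command from the numactl package, which moves a process's pages from one set of nodes to another. A hypothetical example, assuming PID 12345 currently has memory on node 0 that should be moved to node 1:

# migratepages 12345 0 1

While such a command runs, success/s/node should rise above auto pg/s/node, since the explicitly moved pages are counted in pgmigrate_success but not in numa_pages_migrated.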

When might automatic NUMA page migration be useful?

The automatic NUMA migration mechanism uses a fairly simple algorithm to decide whether a page should be moved. It can help the case where a task is started on one node, allocates memory on that node, and is later moved to another node. The expectation is that the task's threads and memory can fit in a single NUMA node and that the task can tolerate the latency created by the additional hinting page faults and the associated migration of page-sized chunks of memory (4KB or 64KB).
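
To judge whether a particular task fits this situation, it helps to compare where the task is running with where its memory sits. One way is numastat from the numactl package, which breaks down a process's memory usage by node (PID 12345 is a placeholder here):

$ numastat -p 12345

If most of the memory is on a node other than the one the task's threads run on, and the working set fits within a single node, this is the kind of case where automatic NUMA balancing can help.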

When might automatic NUMA page migration not be helpful?

For latency-sensitive applications, the additional page faults used to hint where to place pages of memory and the movement of 4KB or 64KB chunks to migrate a page can add unacceptable delays. The system administrator can disable the page migrations with either:


# echo 0 > /proc/sys/kernel/numa_balancing

or by adding the following to the kernel boot command line:


numa_balancing=disable
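
The same setting can also be toggled through sysctl, and a sysctl.d drop-in makes the change persist across reboots without touching the boot command line (a sketch; the drop-in file name is arbitrary):

# sysctl kernel.numa_balancing=0
# echo "kernel.numa_balancing = 0" > /etc/sysctl.d/90-numa-balancing.conf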

The simple algorithm used to determine whether to migrate pages does not work so well for applications that require more threads or more memory than is available in a single NUMA node. Similarly, if multiple threads assigned to different nodes frequently modify pages they share, those pages may migrate back and forth between the nodes. This can be observed as continued high values in the MB/s/node and auto pg/s/node columns of pmrep using the :numa-pgmigrate-per-node configuration from numa-pgmigrates.conf.

There also has to be some free memory available on the NUMA nodes for the automatic migration to work. If the machine does not have free memory on the node that the hinting faults suggest a page be moved to, the automatic migration cannot move the page there. The system then pays the overhead of the automatic page migration monitoring without the benefit of reduced average access time from improved page locality. This can be observed as continued zero or very low values in the success/s/node column and high values in the fail/s/node column of pmrep using the :numa-pgmigrate-per-node configuration from numa-pgmigrates.conf.
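
Whether a target node has room can be checked directly: numactl --hardware prints the total and free memory on each node, and numastat -m gives a more detailed per-node memory breakdown (both commands come from the numactl package):

$ numactl --hardware
$ numastat -m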