From: Huang Ying <ying.huang@intel.com>

With the advent of various new memory types, some machines will have
multiple types of memory, e.g. DRAM and PMEM (persistent memory). The
memory subsystem of these machines can be called a memory tiering
system, because the performance of the different types of memory
usually differs.

In such a system, because the memory access pattern changes over
time, some pages in the slow memory may become hot globally. So in
this patch, the NUMA balancing mechanism is enhanced to dynamically
optimize page placement among the different memory types according to
page hotness.

In a typical memory tiering system, there are CPUs, fast memory and
slow memory in each physical NUMA node. The CPUs and the fast memory
will be put in one logical node (called fast memory node), while the
slow memory will be put in another (faked) logical node (called slow
memory node). That is, the fast memory is regarded as local while the
slow memory is regarded as remote. So it's possible for the recently
accessed pages in the slow memory node to be promoted to the fast
memory node via the existing NUMA balancing mechanism.
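
For example (an illustrative layout, not taken from any particular
machine), a two-socket machine with both DRAM and PMEM may be
presented as:

  node 0: CPUs 0-15, DRAM (fast memory node)
  node 1: CPUs 16-31, DRAM (fast memory node)
  node 2: no CPUs, PMEM (slow memory node)
  node 3: no CPUs, PMEM (slow memory node)
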
The original NUMA balancing mechanism stops migrating pages if the
free memory of the target node would fall below the high watermark.
This is a reasonable policy if there's only one memory type. But it
makes the original NUMA balancing mechanism almost unable to optimize
page placement among different memory types. Details are as follows.

It's common for the working-set size of the workload to be larger
than the size of the fast memory nodes; otherwise, it's unnecessary
to use the slow memory at all. So in the common case, there are
almost never enough free pages in the fast memory nodes, and the
globally hot pages in the slow memory node cannot be promoted to the
fast memory node. To solve this issue, we have the following 2
choices,

a. Ignore the free pages watermark checking when promoting hot pages
   from the slow memory node to the fast memory node. This will
   create some memory pressure in the fast memory node, thus
   triggering memory reclaim, so that the cold pages in the fast
   memory node will be demoted to the slow memory node.

b. Make kswapd of the fast memory node reclaim pages until the free
   pages are a little more (about 10MB) than the high watermark.
   Then, if the free pages of the fast memory node reach the high
   watermark and some hot pages need to be promoted, kswapd of the
   fast memory node will be woken up to demote some cold pages in the
   fast memory node to the slow memory node. This will free some
   extra space in the fast memory node, so the hot pages in the slow
   memory node can be promoted to the fast memory node.

The choice "a" will create the memory pressure in the fast memory
node. If the memory pressure of the workload is high, the memory
pressure may become so high that the memory allocation latency of the
workload is influenced, e.g. the direct reclaiming may be triggered.
The choice "b" works much better at this aspect. If the memory
pressure of the workload is high, the hot pages promotion will stop
earlier because its allocation watermark is higher than that of the
normal memory allocation. So in this patch, choice "b" is
implemented.
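A minimal sketch of the watermark adjustment behind choice "b" (for
illustration only; promote_mark() is a hypothetical helper, while the
10MB constant and the 1/64-of-node-memory cap mirror the
pgdat_balanced() change in this patch):

  /* 10MB expressed in pages, e.g. 2560 pages with 4KB pages */
  #define NUMA_BALANCING_PROMOTE_WATERMARK (10UL * 1024 * 1024 >> PAGE_SHIFT)

  /* Hypothetical helper: the mark kswapd reclaims towards. */
  static unsigned long promote_mark(unsigned long high_wmark,
				    unsigned long node_present_pages)
  {
	/* Cap the extra headroom at 1/64 of the node's memory. */
	unsigned long extra = min(NUMA_BALANCING_PROMOTE_WATERMARK,
				  node_present_pages >> 6);

	return high_wmark + extra;
  }

kswapd of the fast memory node then reclaims until the free pages
exceed this raised mark, leaving headroom for promoted pages.
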
In addition to the original page placement optimization among
sockets, the NUMA balancing mechanism is extended to optimize page
placement according to page hotness among different memory types. So
the sysctl user space interface (numa_balancing) is extended in a
backward compatible way, as follows, so that users can enable/disable
these functionalities individually.

The sysctl is converted from a Boolean value to a bit field. The
definitions of the flags are,
- 0x0: NUMA_BALANCING_DISABLED
- 0x1: NUMA_BALANCING_NORMAL
- 0x2: NUMA_BALANCING_MEMORY_TIERING
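
For example, writing 3 (NUMA_BALANCING_NORMAL |
NUMA_BALANCING_MEMORY_TIERING) to the sysctl enables both the
original socket-level balancing and the memory tiering optimization:

  # echo 3 > /proc/sys/kernel/numa_balancing

while writing 1 behaves exactly as before, which keeps existing
setups working unchanged.
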
Signed-off-by: "Huang, Ying" <ying.huang(a)intel.com>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Rik van Riel <riel(a)surriel.com>
Cc: Mel Gorman <mgorman(a)techsingularity.net>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: Yang Shi <shy828301(a)gmail.com>
Cc: Zi Yan <ziy(a)nvidia.com>
Cc: Wei Xu <weixugc(a)google.com>
Cc: osalvador <osalvador(a)suse.de>
Cc: Shakeel Butt <shakeelb(a)google.com>
Cc: Hasan Al Maruf <hasanalmaruf(a)fb.com>
Cc: linux-kernel(a)vger.kernel.org
Cc: linux-mm(a)kvack.org
Signed-off-by: zhongjiang-ali <zhongjiang-ali(a)linux.alibaba.com>
---
Documentation/sysctl/kernel.txt | 32 ++++++++++++++++++++++----------
include/linux/sched/sysctl.h | 11 +++++++++++
kernel/sched/core.c | 21 +++++++++++++++++----
kernel/sysctl.c | 3 ++-
mm/migrate.c | 19 +++++++++++++++++--
mm/vmscan.c | 16 ++++++++++++++++
6 files changed, 85 insertions(+), 17 deletions(-)
diff --git a/Documentation/sysctl/kernel.txt b/Documentation/sysctl/kernel.txt
index 37a6795..cfc38a7b 100644
--- a/Documentation/sysctl/kernel.txt
+++ b/Documentation/sysctl/kernel.txt
@@ -493,16 +493,23 @@ to the guest kernel command line (see Documentation/admin-guide/kernel-parameter
numa_balancing
-Enables/disables automatic page fault based NUMA memory
-balancing. Memory is moved automatically to nodes
-that access it often.
-
-Enables/disables automatic NUMA memory balancing. On NUMA machines, there
-is a performance penalty if remote memory is accessed by a CPU. When this
-feature is enabled the kernel samples what task thread is accessing memory
-by periodically unmapping pages and later trapping a page fault. At the
-time of the page fault, it is determined if the data being accessed should
-be migrated to a local memory node.
+Enables/disables and configures automatic page fault based NUMA memory
+balancing. Memory is moved automatically to nodes that access it
+often. The value to set can be the result of ORing the following:
+
+=== =============================
+0x0 NUMA_BALANCING_DISABLED
+0x1 NUMA_BALANCING_NORMAL
+0x2 NUMA_BALANCING_MEMORY_TIERING
+=== =============================
+
+Or NUMA_BALANCING_NORMAL to optimize page placement among different
+NUMA nodes to reduce remote accesses. On NUMA machines, there is a
+performance penalty if remote memory is accessed by a CPU. When this
+feature is enabled the kernel samples what task thread is accessing
+memory by periodically unmapping pages and later trapping a page
+fault. At the time of the page fault, it is determined if the data
+being accessed should be migrated to a local memory node.
The unmapping of pages and trapping faults incur additional overhead that
ideally is offset by improved memory locality but there is no universal
@@ -513,6 +520,11 @@ faults may be controlled by the numa_balancing_scan_period_min_ms,
numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
numa_balancing_scan_size_mb, and numa_balancing_settle_count sysctls.
+Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
+different types of memory (represented as different NUMA nodes) to
+place the hot pages in the fast memory. This is also implemented
+based on unmapping and page faults.
+
==============================================================
numa_balancing_scan_period_min_ms, numa_balancing_scan_delay_ms,
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index d975618b..f5e07a1 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -31,6 +31,17 @@ enum sched_tunable_scaling {
SCHED_TUNABLESCALING_LINEAR,
SCHED_TUNABLESCALING_END,
};
+
+#define NUMA_BALANCING_DISABLED 0x0
+#define NUMA_BALANCING_NORMAL 0x1
+#define NUMA_BALANCING_MEMORY_TIERING 0x2
+
+#ifdef CONFIG_NUMA_BALANCING
+extern int sysctl_numa_balancing_mode;
+#else
+#define sysctl_numa_balancing_mode 0
+#endif
+
extern enum sched_tunable_scaling sysctl_sched_tunable_scaling;
extern unsigned int sysctl_numa_balancing_scan_delay;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fd81dd3..5b78a1a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2169,7 +2169,9 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
#ifdef CONFIG_NUMA_BALANCING
-void set_numabalancing_state(bool enabled)
+int sysctl_numa_balancing_mode;
+
+static void __set_numabalancing_state(bool enabled)
{
if (enabled)
static_branch_enable(&sched_numa_balancing);
@@ -2177,13 +2179,22 @@ void set_numabalancing_state(bool enabled)
static_branch_disable(&sched_numa_balancing);
}
+void set_numabalancing_state(bool enabled)
+{
+ if (enabled)
+ sysctl_numa_balancing_mode = NUMA_BALANCING_NORMAL;
+ else
+ sysctl_numa_balancing_mode = NUMA_BALANCING_DISABLED;
+ __set_numabalancing_state(enabled);
+}
+
#ifdef CONFIG_PROC_SYSCTL
int sysctl_numa_balancing(struct ctl_table *table, int write,
void __user *buffer, size_t *lenp, loff_t *ppos)
{
struct ctl_table t;
int err;
- int state = static_branch_likely(&sched_numa_balancing);
+ int state = sysctl_numa_balancing_mode;
if (write && !capable(CAP_SYS_ADMIN))
return -EPERM;
@@ -2193,8 +2204,10 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
if (err < 0)
return err;
- if (write)
- set_numabalancing_state(state);
+ if (write) {
+ sysctl_numa_balancing_mode = state;
+ __set_numabalancing_state(state);
+ }
return err;
}
#endif
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index ae9ccda..0420784 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -132,6 +132,7 @@
static int zero;
static int __maybe_unused one = 1;
static int __maybe_unused two = 2;
+static int __maybe_unused three = 3;
static int __maybe_unused four = 4;
static unsigned long zero_ul;
static unsigned long one_ul = 1;
@@ -453,7 +454,7 @@ static int sysrq_sysctl_handler(struct ctl_table *table, int write,
.mode = 0644,
.proc_handler = sysctl_numa_balancing,
.extra1 = &zero,
- .extra2 = &one,
+ .extra2 = &three,
},
#endif /* CONFIG_NUMA_BALANCING */
#endif /* CONFIG_SCHED_DEBUG */
diff --git a/mm/migrate.c b/mm/migrate.c
index f86a082..5214da8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -49,6 +49,7 @@
#include <linux/ptrace.h>
#include <linux/memory.h>
#include <linux/random.h>
+#include <linux/sched/sysctl.h>
#include <asm/tlbflush.h>
@@ -1943,12 +1944,26 @@ static struct page *alloc_misplaced_dst_page(struct page *page,
static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
{
int page_lru;
+ int order = compound_order(page);
- VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
+ VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
/* Avoid migrating to a node that is nearly full */
- if (!migrate_balanced_pgdat(pgdat, 1UL << compound_order(page)))
+ if (!migrate_balanced_pgdat(pgdat, 1UL << order)) {
+ int z;
+
+ if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
+ !numa_demotion_enabled)
+ return 0;
+ if (next_demotion_node(pgdat->node_id) == NUMA_NO_NODE)
+ return 0;
+ for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+ if (populated_zone(pgdat->node_zones + z))
+ break;
+ }
+ wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
return 0;
+ }
if (isolate_lru_page(page))
return 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 693fe5b..ab79a20 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -57,6 +57,7 @@
#include <linux/swapops.h>
#include <linux/balloon_compaction.h>
+#include <linux/sched/sysctl.h>
#include "internal.h"
@@ -3558,6 +3559,12 @@ static bool pgdat_watermark_boosted(pg_data_t *pgdat, int classzone_idx)
}
/*
+ * Keep the free pages on fast memory node a little more than the high
+ * watermark to accommodate the promoted pages.
+ */
+#define NUMA_BALANCING_PROMOTE_WATERMARK (10UL * 1024 * 1024 >> PAGE_SHIFT)
+
+/*
* Returns true if there is an eligible zone balanced for the request order
* and classzone_idx
*/
@@ -3578,6 +3585,15 @@ static bool pgdat_balanced(pg_data_t *pgdat, int classzone_idx)
continue;
mark = high_wmark_pages(zone);
+ if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+ numa_demotion_enabled &&
+ next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
+ unsigned long promote_mark;
+
+ promote_mark = min(NUMA_BALANCING_PROMOTE_WATERMARK,
+ pgdat->node_present_pages >> 6);
+ mark += promote_mark;
+ }
if (zone_watermark_ok_safe(zone, order, mark, classzone_idx))
return true;
}
--
1.8.3.1