commit f108f1956ecbcd9c6190e0c75bc2c96cda9289c4 upstream.
As Yang Shi suggested [1], it will be helpful to explain why we now
select the target node randomly when there are multiple target nodes.
[1] https://lore.kernel.org/all/CAHbLzkqSqCL+g7dfzeOw8fPyeEC0BBv13Ny1UVGHDkadnQ…
Link: https://lkml.kernel.org/r/c31d36bd097c6e9e69fc0f409c43b78e53e64fc2.16377668…
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: zhongjiang-ali <zhongjiang-ali@linux.alibaba.com>
Cc: Xunlei Pang <xlpang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
---
mm/migrate.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/mm/migrate.c b/mm/migrate.c
index 10c14d1..bd3d656 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1268,6 +1268,14 @@ int next_demotion_node(int node)
/*
* If there are multiple target nodes, just select one
* target node randomly.
+ *
+ * Round-robin selection is an alternative, but it would
+ * require another variable in node_demotion[] to record
+ * the last selected target node; updating that shared
+ * variable could cause cache-line ping-pong between CPUs.
+ * Making the state per-cpu would avoid the caching issue,
+ * but seems more complicated. So selecting the target
+ * node randomly seems better for now.
*/
index = get_random_int() % target_nr;
break;
--
1.8.3.1