Subject: [PATCH v2 3/3] mm: Fix missing mem cgroup soft limit tree updates

On each node, the mem cgroup soft limit tree tracks how much a cgroup has
exceeded its soft memory limit and sorts the cgroups by their excess usage.
On page release, the trees are not updated right away; updates are deferred
until we have gathered a batch of pages belonging to the same cgroup. This
reduces how often we update the soft limit tree and take the locks on the
tree and the associated cgroup.

However, the batch of pages could contain pages from multiple nodes, while
only the soft limit tree of one node would get updated. Change the logic so
that we update the trees in batches of pages, with each batch containing
pages from the same mem cgroup and the same memory node. Whenever we
encounter a page belonging to a different node, we issue an update for the
batch of pages collected so far on the previous node. Note that this
same-node batching is only relevant for a v1 cgroup that has a memory
soft limit set.
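
For illustration only (not part of the patch): a minimal, self-contained
userspace sketch of the gather-and-flush pattern with hypothetical names
(struct gather, add(), flush()). The real logic lives in uncharge_page()
and uncharge_batch() in mm/memcontrol.c, and the real code additionally
flushes on a node change only when the memcg has a soft limit set, so
cgroup v2 is unaffected.

#include <stdio.h>

struct item {
	int group;	/* stands in for the page's mem cgroup */
	int node;	/* stands in for page_to_nid(page)     */
};

struct gather {
	int group;
	int node;
	unsigned long nr;	/* pages accumulated for (group, node) */
	int active;
};

/* Stand-in for uncharge_batch(): apply one update for the whole batch. */
static void flush(struct gather *g)
{
	if (g->active && g->nr)
		printf("update group %d on node %d for %lu pages\n",
		       g->group, g->node, g->nr);
	g->nr = 0;
	g->active = 0;
}

static void add(struct gather *g, const struct item *it)
{
	/*
	 * Flush when the group changes, or (as this patch adds) when the
	 * node changes, so each batched update targets a single
	 * (group, node) pair.
	 */
	if (g->active && (g->group != it->group || g->node != it->node))
		flush(g);

	g->group = it->group;
	g->node = it->node;
	g->active = 1;
	g->nr++;
}

int main(void)
{
	struct gather g = { .nr = 0, .active = 0 };
	struct item items[] = {
		{ 1, 0 }, { 1, 0 }, { 1, 1 },	/* node change -> flush  */
		{ 2, 1 },			/* group change -> flush */
	};
	unsigned long i;

	for (i = 0; i < sizeof(items) / sizeof(items[0]); i++)
		add(&g, &items[i]);
	flush(&g);	/* final flush of the last batch */
	return 0;
}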

Reviewed-by: Ying Huang <ying.huang@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
mm/memcontrol.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d72449eeb85a..8bddee75f5cb 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6804,6 +6804,7 @@ struct uncharge_gather {
 	unsigned long pgpgout;
 	unsigned long nr_kmem;
 	struct page *dummy_page;
+	int nid;
 };
 
 static inline void uncharge_gather_clear(struct uncharge_gather *ug)
@@ -6849,7 +6850,13 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 	 * exclusive access to the page.
 	 */
 
-	if (ug->memcg != page_memcg(page)) {
+	if (ug->memcg != page_memcg(page) ||
+	    /*
+	     * Update soft limit tree used in v1 cgroup in page batch for
+	     * the same node. Relevant only to v1 cgroup with a soft limit.
+	     */
+	    (ug->dummy_page && ug->nid != page_to_nid(page) &&
+	     ug->memcg->soft_limit != PAGE_COUNTER_MAX)) {
 		if (ug->memcg) {
 			uncharge_batch(ug);
 			uncharge_gather_clear(ug);
@@ -6869,6 +6876,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 	ug->pgpgout++;
 
 	ug->dummy_page = page;
+	ug->nid = page_to_nid(page);
 	page->memcg_data = 0;
 	css_put(&ug->memcg->css);
 }
--
2.20.1