Subject: Re: [v2 PATCH] mm: list_lru: set shrinker map bit when child nr_items is not zero
From: Roman Gushchin <>
On Tue, Dec 01, 2020 at 09:44:49AM -0800, Yang Shi wrote:
> When investigating a slab cache bloat problem, a significant amount of
> negative dentry cache was seen, but confusingly it got shrunk neither by
> the reclaimer (the host was under very tight memory pressure) nor by
> dropping caches. The vmcore shows over 14M negative dentry objects on the
> lru, but tracing shows they were not scanned at all. Further investigation
> shows the memcg's vfs shrinker_map bit is not set, so the reclaimer and
> cache dropping simply skip calling the vfs shrinker. We had to reboot the
> hosts to get the memory back.
> I didn't manage to come up with a reproducer in a test environment, and
> the problem can't be reproduced after rebooting. But code inspection
> suggests a race between clearing the shrinker map bit and reparenting.
> The hypothesis is elaborated below.
> The memcg hierarchy on our production environment looks like:
>
>            root
>           /    \
>      system    user
> The main workloads run under the user slice's children, and memcgs there
> are created and removed frequently, so reparenting happens very often
> under the user slice, but no task runs under the user slice directly.
> So with the frequent reparenting and tight memory pressure, the below
> hypothetical race condition may happen:
>
>     CPU A                     CPU B                     CPU C
> reparent
>     dst->nr_items == 0
>                           shrinker:
>                               total_objects == 0
>     add src->nr_items to dst
>     set_bit
>                               clear_bit
> child memcg offline
>     replace child's kmemcg_id with
>     parent's (in memcg_offline_kmem())
>                                                     list_lru_del()
>                                                         see parent's kmemcg_id
>                                                         dec dst->nr_items
> reparent again
>     dst->nr_items may go negative
>     due to concurrent list_lru_del()
>     on CPU C
>                           The second run of shrinker:
>                               read nr_items without any
>                               synchronization, so it may
>                               see an intermediate negative
>                               nr_items, then total_objects
>                               may coincidentally return 0
>                               keep the bit cleared
>     dst->nr_items != 0
>     skip set_bit
>     add src->nr_items to dst
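>
> The unsynchronized read happens in list_lru_count_one(). A minimal sketch
> of its shape at the time (simplified from mm/list_lru.c; note that only
> RCU is held here, not nlru->lock):
>
>     unsigned long list_lru_count_one(struct list_lru *lru,
>                                      int nid, struct mem_cgroup *memcg)
>     {
>             struct list_lru_node *nlru = &lru->node[nid];
>             struct list_lru_one *l;
>             unsigned long count;
>
>             rcu_read_lock();
>             l = list_lru_from_memcg_idx(nlru, memcg_cache_id(memcg));
>             /* Plain read, not serialized against reparenting, so a
>              * transient negative nr_items can be observed here. */
>             count = l->nr_items;
>             rcu_read_unlock();
>
>             return count;
>     }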
> After this race, dst->nr_items may never go back to zero, so reparenting
> will never set the shrinker_map bit again. And since no task runs under
> the user slice directly, no new object will be added to its lru to set
> the shrinker map bit either. The bit stays cleared forever.
> How does list_lru_del() race with reparenting? Reparenting replaces the
> children's kmemcg_id with the parent's without holding nlru->lock, so
> list_lru_del() may see the parent's kmemcg_id while actually deleting an
> item from the child's lru, thereby decrementing the parent's nr_items.
> Hence the parent's nr_items may go negative, as commit 2788cf0c401c
> ("memcg: reparent list_lrus and free kmemcg_id on css offline") says.
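>
> A minimal sketch of that window in memcg_offline_kmem() (simplified,
> details trimmed):
>
>     rcu_read_lock();
>     css_for_each_descendant_pre(css, &memcg->css) {
>             child = mem_cgroup_from_css(css);
>             /* (1) the id switch is not protected by nlru->lock */
>             child->kmemcg_id = parent->kmemcg_id;
>     }
>     rcu_read_unlock();
>
>     /* (2) the child's items are only spliced to the parent here */
>     memcg_drain_all_list_lrus(kmemcg_id, parent);
>
> A list_lru_del() running between (1) and (2) looks up the list_lru_one
> via the new (parent's) kmemcg_id, so it unlinks the object from the
> child's list but decrements the parent's nr_items.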
> Can we move the kmemcg_id replacement to after reparenting? No, because
> then the race with list_lru_del() may drive src->nr_items negative, and
> that would never be fixed up. The shrinker would then never return
> SHRINK_EMPTY, so the shrinker map bit would stay set forever and the
> shrinker would keep being called for nothing.
> Can we synchronize list_lru_del() and reparenting? Yes, it could be done,
> but it seems we'd need to introduce a new lock or reuse nlru->lock, and
> moving the kmemcg_id replacement under nlru->lock sounds complicated.
> Besides, list_lru_del() may be called quite often, so this would add
> overhead to some hot paths, e.g. dentry kill.
> Since it is impossible for dst->nr_items to go negative while
> src->nr_items is zero at the same time, it seems we can simply set the
> shrinker map bit iff src->nr_items != 0. We could synchronize
> list_lru_count_one() and reparenting with nlru->lock, but checking
> src->nr_items in reparenting is the simplest fix and avoids lock
> contention.
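>
> In memcg_drain_list_lru_node() this amounts to roughly the following
> (a sketch, not the literal diff; the old code computed
> set = (!dst->nr_items && src->nr_items) and skipped the bit otherwise):
>
>     spin_lock_irq(&nlru->lock);
>
>     src = list_lru_from_memcg_idx(nlru, src_idx);
>     dst = list_lru_from_memcg_idx(nlru, dst_idx);
>
>     list_splice_init(&src->list, &dst->list);
>
>     if (src->nr_items) {
>             /* The child had items, so the parent's list is non-empty
>              * now regardless of what dst->nr_items transiently reads. */
>             dst->nr_items += src->nr_items;
>             memcg_set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
>             src->nr_items = 0;
>     }
>
>     spin_unlock_irq(&nlru->lock);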
> Fixes: fae91d6d8be5 ("mm/list_lru.c: set bit in memcg shrinker bitmap on first list_lru item appearance")
> Suggested-by: Roman Gushchin <>
> Cc: Vladimir Davydov <>
> Cc: Kirill Tkhai <>
> Cc: Shakeel Butt <>
> Cc: <> v4.19+
> Signed-off-by: Yang Shi <>

Hi Yang!

Code-wise it looks good to me. Thank you for updating!

I think the commit log can be simplified a bit: you don't really need 3 CPUs
to reproduce the problem. Also, IMO the section about fixing the problem by
introducing an additional synchronization can be dropped, but it's up to you.

With the updated commit log, please feel free to add
Reviewed-by: Roman Gushchin <>.

Thank you!
