Subject: Re: [PATCH v3] mm: memcg: Fix memcg reclaim soft lockup
On Thu, Aug 27, 2020 at 10:32:29AM +0800, Xunlei Pang wrote:
> We've hit a soft lockup with "CONFIG_PREEMPT_NONE=y" when
> the target memcg doesn't have any reclaimable memory.
>
> It can be easily reproduced as below:
> watchdog: BUG: soft lockup - CPU#0 stuck for 111s![memcg_test:2204]
> CPU: 0 PID: 2204 Comm: memcg_test Not tainted 5.9.0-rc2+ #12
> Call Trace:
> shrink_lruvec+0x49f/0x640
> shrink_node+0x2a6/0x6f0
> do_try_to_free_pages+0xe9/0x3e0
> try_to_free_mem_cgroup_pages+0xef/0x1f0
> try_charge+0x2c1/0x750
> mem_cgroup_charge+0xd7/0x240
> __add_to_page_cache_locked+0x2fd/0x370
> add_to_page_cache_lru+0x4a/0xc0
> pagecache_get_page+0x10b/0x2f0
> filemap_fault+0x661/0xad0
> ext4_filemap_fault+0x2c/0x40
> __do_fault+0x4d/0xf9
> handle_mm_fault+0x1080/0x1790
>
> It only happens on our 1-vcpu instances, because there's no chance
> for the oom reaper to run and reclaim memory from the to-be-killed process.
>
> Add a cond_resched() in the upper-level shrink_node_memcgs() to solve
> this issue. This gives us a scheduling point for each memcg in the
> reclaimed hierarchy, with no dependency on the amount of reclaimable
> memory in that memcg, making the behavior more predictable.
>
> Acked-by: Chris Down <chris@chrisdown.name>
> Acked-by: Michal Hocko <mhocko@suse.com>
> Suggested-by: Michal Hocko <mhocko@suse.com>
> Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
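
For anyone following along, here is a minimal sketch of where the
scheduling point ends up, based on the v5.9-rc shrink_node_memcgs()
hierarchy walk in mm/vmscan.c. It is a simplified illustration of the
idea, not the exact patch hunk; the locals and the rest of the per-memcg
loop body are elided.

	/* mm/vmscan.c -- simplified sketch, not the actual diff */
	static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
	{
		struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
		struct mem_cgroup *memcg;

		memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
		do {
			/*
			 * Scheduling point for every memcg visited in the
			 * hierarchy walk, whether or not it has reclaimable
			 * pages, so a CONFIG_PREEMPT_NONE kernel cannot spin
			 * here unpreempted and trip the soft lockup watchdog.
			 */
			cond_resched();

			/* ... per-memcg work: shrink_lruvec(), shrink_slab(), ... */

		} while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));
	}

Because the cond_resched() sits at the top of the loop body, it fires
once per memcg iterated, even when that memcg contributes nothing
reclaimable, which is exactly the case that starved the 1-vcpu machine.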
