Date: Mon, 21 Nov 2016
From: Michal Hocko
Subject: Re: INFO: rcu_sched detected stalls on CPUs/tasks with `kswapd` and `mem_cgroup_shrink_node`
On Mon 21-11-16 06:01:22, Paul E. McKenney wrote:
> On Mon, Nov 21, 2016 at 02:41:31PM +0100, Michal Hocko wrote:
[...]
> > To the patch. I cannot say I would like it. cond_resched_rcu_qs sounds
> > way too low-level for this usage. If anything a cond_resched somewhere
> > inside mem_cgroup_iter would be more appropriate to me.
>
> Like this?
>
> Thanx, Paul
>
> ------------------------------------------------------------------------
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index ae052b5e3315..81cb30d5b2fc 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -867,6 +867,7 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root,
>  out:
>  	if (prev && prev != root)
>  		css_put(&prev->css);
> +	cond_resched_rcu_qs();

I still do not understand why we should play with the _rcu_qs variant at
all and why a regular cond_resched is not sufficient. Anyway, I would
have to double check whether we can do cond_resched in the iterator at
all. I do not remember any callers which are atomic, but I might easily
be wrong here. Before we touch this code, though, I would really like
to understand what is actually going on, because as I have already
pointed out, we should have some resched points in the reclaim path.
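
For illustration only, the plain-cond_resched variant being suggested
might look like the following. This is an untested sketch that assumes
mem_cgroup_iter has no atomic callers; note the out: label sits after
rcu_read_unlock(), so a resched point there would be outside the RCU
read-side critical section:

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -867,6 +867,7 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root,
 out:
 	if (prev && prev != root)
 		css_put(&prev->css);
+	cond_resched();	/* plain resched point, no explicit RCU quiescent state */
 
 	return memcg;
 }

The difference is that cond_resched_rcu_qs() also notes a voluntary
context switch to RCU even when cond_resched() does not actually
reschedule, which is what silences the stall detector, while a plain
cond_resched() only gives the scheduler a chance to preempt.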

--
Michal Hocko
SUSE Labs
