Subject: Re: [PATCH RFC] rcu/tree: Use GFP_MEMALLOC for alloc memory to free memory pattern
On Tue, Mar 31, 2020 at 04:04:33PM +0200, Uladzislau Rezki wrote:
> On Tue, Mar 31, 2020 at 09:16:28AM -0400, Joel Fernandes (Google) wrote:
> > In the kfree_rcu() headless implementation (where the caller need not pass
> > an rcu_head, but rather directly passes a pointer to an object), we have
> > a fallback where we allocate an rcu_head wrapper for the caller (not the
> > common case). This brings the pattern of needing to allocate some memory
> > to free some memory. Currently we use the GFP_ATOMIC flag to try harder
> > for this allocation, but the GFP_MEMALLOC flag is more tailored to this
> > pattern. We need to try harder not only in atomic context but also in
> > non-atomic context, so use the GFP_MEMALLOC flag instead.
> >
> > Also remove the __GFP_NOWARN flag: although we do have a
> > synchronize_rcu() fallback for the absolute worst case, we would still
> > like to avoid that path and at least trigger a warning for the user.
> >
> > Cc: linux-mm@kvack.org
> > Cc: rcu@vger.kernel.org
> > Cc: willy@infradead.org
> > Cc: peterz@infradead.org
> > Cc: neilb@suse.com
> > Cc: vbabka@suse.cz
> > Cc: mgorman@suse.de
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> > ---
> >
> > This patch is based on the (not yet upstream) code in:
> > git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (branch rcu/kfree)
> >
> > It is a follow-up to the posted series:
> > https://lore.kernel.org/lkml/20200330023248.164994-1-joel@joelfernandes.org/
> >
> >
> > kernel/rcu/tree.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 4be763355c9fb..965deefffdd58 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -3149,7 +3149,7 @@ static inline struct rcu_head *attach_rcu_head_to_object(void *obj)
> >
> > if (!ptr)
> > ptr = kmalloc(sizeof(unsigned long *) +
> > - sizeof(struct rcu_head), GFP_ATOMIC | __GFP_NOWARN);
> > + sizeof(struct rcu_head), GFP_MEMALLOC);
> >
> Hello, Joel
>
> I have some questions about improving it, see them below:
>
> Do you mean __GFP_MEMALLOC? Can that flag be used in atomic context?
> Actually we do allocate there under a spinlock. Should it be combined with
> GFP_ATOMIC | __GFP_MEMALLOC?

Yes, I mean __GFP_MEMALLOC. Sorry, the patch was just to show the idea and
was marked as RFC.

Good point on the atomic aspect of this path; you are right that we cannot sleep.
I believe the GFP_NOWAIT I mentioned in my last reply will take care of that?
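
For reference, here is a minimal sketch of what that fallback allocation could
look like with the combination being discussed (illustrative only, not a
revised patch; the surrounding logic is as in the posted series):

	/*
	 * GFP_NOWAIT: no sleeping, so it stays safe under the spinlock.
	 * __GFP_MEMALLOC: allow dipping into memory reserves for this
	 * allocate-memory-to-free-memory fallback.
	 */
	if (!ptr)
		ptr = kmalloc(sizeof(unsigned long *) + sizeof(struct rcu_head),
			      GFP_NOWAIT | __GFP_MEMALLOC);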

> As for removing __GFP_NOWARN: actually it is expected that the
> allocation can fail, and if so we fall back to the last emergency case.
> You can see the trace, but what would you do with that information?

Yes, the benefit of the trace/warning is that the user can switch to a
non-headless API and avoid the synchronize_rcu(), which would help them get
faster kfree_rcu() performance instead of suffering silent slowdowns.
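
To illustrate the trade-off (a rough sketch; the single-argument headless form
comes from the not-yet-upstream series linked above, so treat the exact
invocation as an assumption):

	struct foo {
		int data;
		struct rcu_head rh;	/* caller-provided rcu_head */
	};

	/* Non-headless: uses the embedded rcu_head, so no internal
	 * allocation and no synchronize_rcu() fallback.
	 */
	kfree_rcu(fp, rh);

	/* Headless: no rcu_head required from the caller, but the
	 * implementation may need to allocate a wrapper internally, which
	 * is the fallback path this patch touches.
	 */
	kfree_rcu(fp);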

It also tells us whether the headless API is worth it in the long run. I
think it is, because we will likely never hit the synchronize_rcu()
failsafe. But if we hit it a lot, at least it won't happen silently.

Paul was concerned about the following scenario when hitting synchronize_rcu():
1. Consider a system under memory pressure.
2. Consider some other subsystem X depending on another subsystem Y which uses
   kfree_rcu(). If Y doesn't complete the operation in time, X accumulates
   more memory.
3. Since kfree_rcu() on Y hits synchronize_rcu() a lot, Y slows down. This
   causes X to allocate even more memory, causing a chain reaction.
Paul, please correct me if I'm wrong.

thanks,

- Joel
