Subject: Re: [RFC-PATCH 1/2] mm: Add __GFP_NO_LOCKS flag
Hello, Michal.

> > On Wed 12-08-20 02:13:25, Thomas Gleixner wrote:
> > [...]
> > > I can understand your rationale and what you are trying to solve. So, if
> > > we can actually have a distinct GFP variant:
> > >
> > > GFP_I_ABSOLUTELY_HAVE_TO_DO_THAT_AND_I_KNOW_IT_CAN_FAIL_EARLY
> >
> > Even if we cannot make the zone->lock raw I would prefer to not
> > introduce a new gfp flag. Well we can do an alias for easier grepping
> > #define GFP_RT_SAFE 0
> >
> > that would imply nowait semantic and would exclude waking up kswapd as
> > well. If we can make wake up safe under RT then the alias would reflect
> > that without any code changes.
> >
> > The second, and the more important part, would be to bail out anytime
> > the page allocator is to take a lock which is not allowed in the current
> > RT context. Something like
> >
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index 67a0774e080b..3ef3ac82d110 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -237,6 +237,9 @@ struct vm_area_struct;
> > * that subsystems start with one of these combinations and then set/clear
> > * %__GFP_FOO flags as necessary.
> > *
> > + * %GFP_RT_SAFE users can not sleep and they are running under RT atomic context
> > + * e.g. under raw_spin_lock. Failure of an allocation is to be expected.
> > + *
> > * %GFP_ATOMIC users can not sleep and need the allocation to succeed. A lower
> > * watermark is applied to allow access to "atomic reserves"
> > *
> > @@ -293,6 +296,7 @@ struct vm_area_struct;
> > * version does not attempt reclaim/compaction at all and is by default used
> > * in page fault path, while the non-light is used by khugepaged.
> > */
> > +#define GFP_RT_SAFE 0
> > #define GFP_ATOMIC (__GFP_HIGH|__GFP_ATOMIC|__GFP_KSWAPD_RECLAIM)
> > #define GFP_KERNEL (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
> > #define GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT)
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index e028b87ce294..268ae872cc2a 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2824,6 +2824,13 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
> > {
> > int i, alloced = 0;
> >
> > + /*
> > + * Hard atomic contexts are not supported by the allocator for
> > + * anything but pcp requests
> > + */
> > + if (!preemptible())
> > + return 0;
> > +
> > spin_lock(&zone->lock);
> > for (i = 0; i < count; ++i) {
> > struct page *page = __rmqueue(zone, order, migratetype,
> > @@ -3371,6 +3378,13 @@ struct page *rmqueue(struct zone *preferred_zone,
> > goto out;
> > }
> >
> > + /*
> > + * Hard atomic contexts are not supported by the allocator for high
> > + * order requests
> > + */
> > + if (WARN_ON_ONCE(!preemptible()))
> > + return NULL;
> > +
> > /*
> > * We most definitely don't want callers attempting to
> > * allocate greater than order-1 page units with __GFP_NOFAIL.
> > @@ -4523,6 +4537,12 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> > (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM)))
> > gfp_mask &= ~__GFP_ATOMIC;
> >
> > + /* Hard atomic contexts support is very limited to the fast path */
> > + if (!preemptible()) {
> > + WARN_ON_ONCE(gfp_mask != GFP_RT_SAFE);
> > + return NULL;
> > + }
> > +
> > retry_cpuset:
> > compaction_retries = 0;
> > no_progress_loops = 0;
> >
> > What do you think?
> >
> > > which is easy to grep for then having the page allocator go down to the
> > > point where zone lock gets involved is not the end of the world for
> > > RT in theory - unless that damned reality tells otherwise. :)
> > >
> > > The page allocator allocations should also have a limit on the number of
> > > pages and eventually also page order (need to stare at the code or let
> > > Michal educate me that the order does not matter).
> >
> > In practice anything but order 0 is out of question because we need
> > zone->lock for that currently. Maybe we can introduce pcp lists for
> > higher orders in the future - I have a vague recollection Mel was
> > playing with that some time ago.
> >
> > > To make it consistent the same GFP_ variant should allow the slab
> > > allocator go to the point where the slab cache is exhausted.
> > >
> > > Having a distinct and clearly defined GFP_ variant is really key to
> > > chase down offenders and to make reviewers double check upfront why this
> > > is absolutely required.
> >
> > Having a high level and recognizable gfp mask is OK but I would really
> > like not to introduce a dedicated flag. The page allocator should be
> > able to recognize the context which cannot be handled.
> >
> Sorry for jumping in. We can rely on preemptible() for sure, if CONFIG_PREEMPT_RT
> is enabled, something like below:
>
> if (IS_ENABLED(CONFIG_PREEMPT_RT) && preemptible())
>
I was a bit out of focus and did not mention one thing. Identifying the context
type using the preemptible() primitives does not look suitable, IMHO. GFP_* flags
are not supposed to identify a context type, because that is not possible in all
cases.
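
To be concrete, relying on preemptible() would boil down to something like the
sketch below (the rt_unsafe_context() helper name is made up here, purely for
illustration). Note that on a !CONFIG_PREEMPTION kernel preemptible() is
hard-wired to 0, which is why the IS_ENABLED(CONFIG_PREEMPT_RT) guard is needed
at all:

/*
 * Illustrative sketch only: the context is RT-unsafe when we run on
 * PREEMPT_RT inside a non-preemptible section.  Without the
 * IS_ENABLED() guard, preemptible() evaluates to a constant 0 on
 * !CONFIG_PREEMPTION kernels and every allocation would bail out.
 */
static inline bool rt_unsafe_context(void)
{
	return IS_ENABLED(CONFIG_PREEMPT_RT) && !preemptible();
}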

Also, bailing out based on the context type would not allow us to get a page from
atomic contexts, which is what we are trying to achieve :)

Whereas, I mean, we do have the possibility to do a lock-less per-cpu-list
allocation without touching any zone lock:

if (gfp_mask == 0) {	/* i.e. GFP_RT_SAFE */
	page = check_pcp_lists();
	if (page)
		return page;

	/* Bail out here without doing further logic, like prefetching. */
	return NULL;
}
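
To make that more concrete, here is a rough sketch of such a pcp-only fast path,
written against the current pcp infrastructure. The rt_safe_alloc() name is made
up for illustration; the real fast path is rmqueue_pcplist():

/*
 * Illustrative sketch only: serve a GFP_RT_SAFE (== 0) request from
 * the per-cpu lists and fail right away when they are empty, i.e.
 * no zone->lock, no refill via rmqueue_bulk(), no prefetching.
 */
static struct page *rt_safe_alloc(struct zone *zone, int migratetype)
{
	struct per_cpu_pages *pcp;
	struct page *page = NULL;
	unsigned long flags;

	/* pcp lists are protected by disabled interrupts, not a lock. */
	local_irq_save(flags);
	pcp = &this_cpu_ptr(zone->pageset)->pcp;
	if (!list_empty(&pcp->lists[migratetype])) {
		page = list_first_entry(&pcp->lists[migratetype],
					struct page, lru);
		list_del(&page->lru);
		pcp->count--;
	}
	local_irq_restore(flags);

	return page;
}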

For example, consider our case:

kfree_rcu() -> raw_spin_lock() -> page_alloc -> preemptible() -> false
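
On the caller side that would look roughly as follows (hypothetical sketch; krcp
stands for the kfree_rcu() per-CPU structure, and fallback_to_rcu_head() is a
made-up name for the existing rcu_head based fallback path):

/*
 * Sketch only: we hold a raw spinlock, so on PREEMPT_RT this is a
 * hard atomic context and the allocation may only touch the
 * lock-less pcp lists.
 */
raw_spin_lock(&krcp->lock);
page = alloc_page(GFP_RT_SAFE);		/* pcp fast path only, can fail */
if (!page)
	fallback_to_rcu_head();		/* hypothetical fallback helper */
raw_spin_unlock(&krcp->lock);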

--
Vlad Rezki
