Subject: Re: [PATCH 1/2] mm/slub: wake up kswapd for initial high order allocation
On 08/28/2017 03:11 AM, js1304@gmail.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> slub uses a higher order allocation than it actually needs. In this case,
> we don't want to do direct reclaim to obtain such a high order page, since
> that causes a big latency for the user. Instead, we would like to fall back
> to the lower order allocation that it actually needs.
>
> However, we also want to get this higher order page next time, in order to
> get the best performance, and that is the role of background threads such
> as kswapd and kcompactd. To wake them up, we should not clear
> __GFP_KSWAPD_RECLAIM.
>
> Contrary to this intention, the current code clears __GFP_KSWAPD_RECLAIM,
> so fix it.
>
> Note that this patch also does some cleanup:
> __GFP_NOFAIL is cleared twice, so remove one of the two clears.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Hm, so this seems to revert Mel's 444eb2a449ef ("mm: thp: set THP defrag
by default to madvise and add a stall-free defrag option") wrt the slub
allocate_slab() part. AFAICS the intention of Mel's patch was to remove
a special case in __alloc_pages_slowpath() where having __GFP_THISNODE
and lacking __GFP_DIRECT_RECLAIM effectively meant also lacking
__GFP_KSWAPD_RECLAIM. The commit log says that slab/slub might change
behavior, so he moved the removal of __GFP_KSWAPD_RECLAIM into them.

But AFAICS only slab uses __GFP_THISNODE, while slub doesn't. So your
patch would indeed revert an unintentional change of Mel's commit. Is
that right, or am I missing something?
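
To make the flag arithmetic concrete, here is a quick standalone sketch
(not kernel code; the bit values below are invented for the demo, only
the relationship __GFP_RECLAIM == __GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM
mirrors include/linux/gfp.h). It shows that the current mask also drops
the kswapd hint, while the patched mask keeps it:

/* Standalone illustration; DEMO_* values are made up, not the kernel's. */
#include <stdio.h>

#define DEMO_GFP_DIRECT_RECLAIM  0x1u
#define DEMO_GFP_KSWAPD_RECLAIM  0x2u
#define DEMO_GFP_RECLAIM         (DEMO_GFP_DIRECT_RECLAIM | DEMO_GFP_KSWAPD_RECLAIM)
#define DEMO_GFP_NOFAIL          0x4u
#define DEMO_GFP_NOMEMALLOC      0x8u

int main(void)
{
	unsigned int flags = DEMO_GFP_DIRECT_RECLAIM | DEMO_GFP_KSWAPD_RECLAIM;

	/* current code: & ~(__GFP_RECLAIM | __GFP_NOFAIL) clears both reclaim bits */
	unsigned int before = (flags | DEMO_GFP_NOMEMALLOC) &
			      ~(DEMO_GFP_RECLAIM | DEMO_GFP_NOFAIL);

	/* patched code: & ~__GFP_DIRECT_RECLAIM keeps the kswapd hint */
	unsigned int after = (flags | DEMO_GFP_NOMEMALLOC) &
			     ~DEMO_GFP_DIRECT_RECLAIM;

	printf("before: kswapd bit kept? %d\n", !!(before & DEMO_GFP_KSWAPD_RECLAIM)); /* prints 0 */
	printf("after:  kswapd bit kept? %d\n", !!(after & DEMO_GFP_KSWAPD_RECLAIM));  /* prints 1 */
	return 0;
}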

> ---
> mm/slub.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 0dc7397..e1e442c 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1578,8 +1578,12 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
> * so we fall-back to the minimum order allocation.
> */
> alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_NOFAIL;
> - if ((alloc_gfp & __GFP_DIRECT_RECLAIM) && oo_order(oo) > oo_order(s->min))
> - alloc_gfp = (alloc_gfp | __GFP_NOMEMALLOC) & ~(__GFP_RECLAIM|__GFP_NOFAIL);
> + if (oo_order(oo) > oo_order(s->min)) {
> + if (alloc_gfp & __GFP_DIRECT_RECLAIM) {
> + alloc_gfp |= __GFP_NOMEMALLOC;
> + alloc_gfp &= ~__GFP_DIRECT_RECLAIM;
> + }
> + }
>
> page = alloc_slab_page(s, alloc_gfp, node, oo);
> if (unlikely(!page)) {
>

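As background on why waking kswapd only helps the *next* allocation: the
commit message describes a two-step pattern where the high-order attempt
is purely opportunistic and the immediate fallback retries at the minimum
order with the caller's original flags. A toy, self-contained sketch of
that pattern (try_alloc() and its behavior are invented for illustration,
not kernel APIs):

#include <stdbool.h>
#include <stdio.h>

/*
 * try_alloc() is a hypothetical stand-in for alloc_slab_page(); it
 * pretends that high-order attempts fail unless direct reclaim is
 * allowed, to mimic fragmentation.
 */
static bool try_alloc(unsigned int order, bool may_direct_reclaim)
{
	return order == 0 || may_direct_reclaim;
}

int main(void)
{
	unsigned int oo = 3;   /* preferred (higher) order */
	unsigned int min = 0;  /* minimum order actually needed */
	bool got;

	/* Opportunistic attempt at the preferred order without direct
	 * reclaim; with the patch it may still wake kswapd/kcompactd so
	 * a high-order page is more likely to be available next time. */
	got = try_alloc(oo, false);
	if (!got) {
		/* Fall back to the minimum order with the original
		 * flags, which may enter direct reclaim. */
		got = try_alloc(min, true);
	}

	printf("allocated: %s\n", got ? "yes" : "no");
	return 0;
}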