Date: Fri, 6 Jan 2017
From: Michal Hocko <mhocko@kernel.org>
Subject: Re: __GFP_REPEAT usage in fq_alloc_node

On Fri 06-01-17 07:39:14, Eric Dumazet wrote:
> On Fri, Jan 6, 2017 at 7:20 AM, Michal Hocko <mhocko@kernel.org> wrote:
> >
> > Hi Eric,
> > I am currently checking kmalloc with vmalloc fallback users and convert
> > them to a new kvmalloc helper [1]. While I am adding a support for
> > __GFP_REPEAT to kvmalloc [2] I was wondering what is the reason to use
> > __GFP_REPEAT in fq_alloc_node in the first place. c3bd85495aef
> > ("pkt_sched: fq: more robust memory allocation") doesn't mention
> > anything. Could you clarify this please?
> >
> > Thanks!
>
> I guess this question applies to all __GFP_REPEAT usages in net/ ?

I am _currently_ interested only in those that have a vmalloc fallback,
and I cannot see any others. Maybe my git grep-fu needs some help.

> At the time, tests on the hardware I had in my labs showed that
> vmalloc() could deliver pages spread
> all over the memory and that was a small penalty (once memory is
> fragmented enough, not at boot time)

I see. Then I will go with kvmalloc with __GFP_REPEAT and we can drop
the flag later, once it is no longer needed. See the patch below.

Thanks for the clarification.
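
Just to make the expected semantics explicit, what I rely on from
kvmalloc_node for this caller is roughly the sketch below (illustration
only, not the actual implementation of the helper, and the name is made
up for this mail):

#include <linux/slab.h>
#include <linux/vmalloc.h>

/*
 * Illustration of the intended behaviour: one attempt at a physically
 * contiguous allocation without warning on failure, then a fallback to
 * vmalloc, the same pattern the open coded fq_alloc_node uses today.
 */
static void *kvmalloc_node_sketch(size_t size, gfp_t flags, int node)
{
	void *ptr;

	/*
	 * Single contiguous attempt; the caller's flags (here
	 * GFP_KERNEL | __GFP_REPEAT) are passed through.
	 */
	ptr = kmalloc_node(size, flags | __GFP_NOWARN, node);
	if (ptr)
		return ptr;

	/* Fall back to a virtually contiguous allocation on the same node. */
	return vmalloc_node(size, node);
}

With that in place the call site boils down to a single
kvmalloc_node(sz, GFP_KERNEL | __GFP_REPEAT, node), and fq_free can keep
using kvfree.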

> I guess this wont be anymore a concern if I can finish my pending work
> about vmalloc() trying to get adjacent pages
> https://lkml.org/lkml/2016/12/21/285

I see.

Thanks!
---
From 1f3769de85c18aa0796f215cffdb01a2e70d2d2f Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@suse.com>
Date: Fri, 6 Jan 2017 17:03:31 +0100
Subject: [PATCH] net_sched: use kvmalloc rather than opencoded variant

fq_alloc_node open-codes kmalloc with a vmalloc fallback. Use the kvmalloc
variant instead. Keep the __GFP_REPEAT flag, based on the explanation from
Eric:
"
At the time, tests on the hardware I had in my labs showed that
vmalloc() could deliver pages spread all over the memory and that was a
small penalty (once memory is fragmented enough, not at boot time)
"

Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
net/sched/sch_fq.c | 12 +-----------
1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
index a4f738ac7728..594f77d89f6c 100644
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -624,16 +624,6 @@ static void fq_rehash(struct fq_sched_data *q,
 	q->stat_gc_flows += fcnt;
 }
 
-static void *fq_alloc_node(size_t sz, int node)
-{
-	void *ptr;
-
-	ptr = kmalloc_node(sz, GFP_KERNEL | __GFP_REPEAT | __GFP_NOWARN, node);
-	if (!ptr)
-		ptr = vmalloc_node(sz, node);
-	return ptr;
-}
-
 static void fq_free(void *addr)
 {
 	kvfree(addr);
@@ -650,7 +640,7 @@ static int fq_resize(struct Qdisc *sch, u32 log)
 		return 0;
 
 	/* If XPS was setup, we can allocate memory on right NUMA node */
-	array = fq_alloc_node(sizeof(struct rb_root) << log,
+	array = kvmalloc_node(sizeof(struct rb_root) << log, GFP_KERNEL | __GFP_REPEAT,
			      netdev_queue_numa_node_read(sch->dev_queue));
 	if (!array)
 		return -ENOMEM;
--
2.11.0

--
Michal Hocko
SUSE Labs
