Date: 2011-11-14
Subject: Re: [PATCH v3 4/5] slub: Only IPI CPUs that have per cpu obj to flush
On Sun, Nov 13, 2011 at 10:57 PM, Gilad Ben-Yossef <gilad@benyossef.com> wrote:
> On Sun, Nov 13, 2011 at 2:20 PM, Hillf Danton <dhillf@gmail.com> wrote:
>>
>> On Sun, Nov 13, 2011 at 6:17 PM, Gilad Ben-Yossef <gilad@benyossef.com> wrote:
>>
> ...
>>
>> > diff --git a/mm/slub.c b/mm/slub.c
>> > index 7d2a996..caf4b3a 100644
>> > --- a/mm/slub.c
>> > +++ b/mm/slub.c
>> > @@ -2006,7 +2006,20 @@ static void flush_cpu_slab(void *d)
>> >
>> >  static void flush_all(struct kmem_cache *s)
>> >  {
>> > -       on_each_cpu(flush_cpu_slab, s, 1);
>> > +       cpumask_var_t cpus;
>> > +       struct kmem_cache_cpu *c;
>> > +       int cpu;
>> > +
>> > +       if (likely(zalloc_cpumask_var(&cpus, GFP_ATOMIC))) {
>>
>> Perhaps the technique of local_cpu_mask, defined in kernel/sched_rt.c,
>> could be used to replace the above atomic allocation.
>>
>
> Thank you for taking the time to review my patch :-)
>
> That is indeed the direction I went with in the previous iteration of
> this patch, with one small change: observing that the allocation only
> actually occurs for CPUMASK_OFFSTACK=y, which by definition means
> systems with lots and lots of CPUs, it is actually better to allocate
> the cpumask per kmem_cache rather than per CPU, since on systems where
> it matters we are bound to have more CPUs (e.g. 4096) than kmem_caches
> (~160). See https://lkml.org/lkml/2011/10/23/151.
>
> I then went ahead and further optimized the code so that the memory
> overhead of allocating those cpumasks is only incurred on
> CPUMASK_OFFSTACK=y systems. See https://lkml.org/lkml/2011/10/23/152.
>
> As you can see from the discussion that evolved, there seems to be
> agreement that the added code complexity is simply not worth it for
> what is, unlike sched_rt, a rather esoteric case, and one where an
> allocation failure is easily dealt with.
>
Even with the added overhead of the allocation, the number of IPIs may
still not go down as much as we wish, right?
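
For reference, a minimal sketch of what the truncated diff above appears
to do, reconstructed from this thread: build a mask of only those CPUs
that actually hold per-cpu objects of the cache, IPI just those via
on_each_cpu_mask() (introduced earlier in this patch series), and fall
back to IPIing every CPU if the GFP_ATOMIC cpumask allocation fails.
The c->page test is an assumption; the quoted diff cuts off before the
loop body.

	static void flush_all(struct kmem_cache *s)
	{
		cpumask_var_t cpus;
		struct kmem_cache_cpu *c;
		int cpu;

		if (likely(zalloc_cpumask_var(&cpus, GFP_ATOMIC))) {
			/* Collect only the CPUs holding a per-cpu slab of s. */
			for_each_online_cpu(cpu) {
				c = per_cpu_ptr(s->cpu_slab, cpu);
				if (c->page)	/* assumed test */
					cpumask_set_cpu(cpu, cpus);
			}
			/* IPI only the CPUs set in the mask. */
			on_each_cpu_mask(cpus, flush_cpu_slab, s, 1);
			free_cpumask_var(cpus);
		} else {
			/* Allocation failed: IPI everyone, which is always
			 * correct, just more expensive. */
			on_each_cpu(flush_cpu_slab, s, 1);
		}
	}

Note that on CPUMASK_OFFSTACK=n kernels cpumask_var_t lives on the
stack and zalloc_cpumask_var() cannot fail, so the fallback only ever
runs on the big-iron configurations discussed above.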
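
And a minimal sketch of the alternative Hillf points at, along the
lines of local_cpu_mask in kernel/sched_rt.c: masks allocated once at
init time (per CPU here; Gilad's earlier iteration allocated per
kmem_cache instead), so the flush path never allocates at all. The
names below are hypothetical.

	/* Hypothetical: one mask per CPU, set up once at boot, so that
	 * flush_all() needs no GFP_ATOMIC allocation. */
	static DEFINE_PER_CPU(cpumask_var_t, slub_flush_mask);

	static int __init slub_flush_mask_init(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			zalloc_cpumask_var_node(&per_cpu(slub_flush_mask, cpu),
						GFP_KERNEL, cpu_to_node(cpu));
		return 0;
	}

The memory cost is what the per-kmem_cache variant shrinks: with 4096
possible CPUs this keeps 4096 masks, versus one per cache (~160) in the
earlier posting.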