Subject: Re: [this_cpu_xx V5 19/19] SLUB: Experimental new fastpath w/o interrupt disable
On Thu, 8 Oct 2009, Mathieu Desnoyers wrote:

> > Index: linux-2.6/mm/slub.c
> > ===================================================================
> > --- linux-2.6.orig/mm/slub.c	2009-10-08 11:35:59.000000000 -0500
> > +++ linux-2.6/mm/slub.c	2009-10-08 14:03:22.000000000 -0500
> > @@ -1606,7 +1606,14 @@ static void *__slab_alloc(struct kmem_ca
> >  			  unsigned long addr)
> >  {
> >  	void **object;
> > -	struct page *page = __this_cpu_read(s->cpu_slab->page);
> > +	struct page *page;
> > +	unsigned long flags;
> > +	int hotpath;
> > +
> > +	local_irq_save(flags);
>
> (Recommend adding)
>
> preempt_enable_no_resched();
>
>
> The preempt enable right in the middle of a big function adds an
> unnecessary barrier(), which will keep gcc from doing some of its
> optimizations. This might hurt performance.

In the middle of the function we have determined that we have to go to
the page allocator to get more memory. There is not much the compiler
can do to speed that up.
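
For reference, preempt_enable() in kernels of this era
(include/linux/preempt.h, CONFIG_PREEMPT case, slightly abridged) is
where the compiler barrier in question comes from:

	#define preempt_enable_no_resched() \
	do { \
		barrier(); \
		dec_preempt_count(); \
	} while (0)

	#define preempt_enable() \
	do { \
		preempt_enable_no_resched(); \
		barrier(); \
		preempt_check_resched(); \
	} while (0)

Note that preempt_enable_no_resched() carries a barrier() as well; the
difference is where in the function it ends up.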

> I still recommend the preempt_enable_no_resched() at the beginning of
> __slab_alloc(), and simply putting a check_resched() here (which saves
> us the odd compiler barrier in the middle of the function).

Then preemption would be unnecessarily disabled for the page allocator
call?

> >  	if (gfpflags & __GFP_WAIT)
> >  		local_irq_enable();
> >
> > +	preempt_enable();
>
> We could replace the above by:
>
> 	if (gfpflags & __GFP_WAIT) {
> 		local_irq_enable();
> 		preempt_check_resched();
> 	}

Which would leave preempt off for the page allocator.
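
To make the two orderings concrete, a sketch (based on the hunk above;
new_slab() stands in for the page allocator call that follows in
__slab_alloc()):

	/* Patch as posted: preempt count dropped before allocating */
	if (gfpflags & __GFP_WAIT)
		local_irq_enable();
	preempt_enable();
	page = new_slab(s, gfpflags, node);	/* runs preemptible */

	/* Suggested variant: only a resched check here, so unless the
	   count was already dropped on entry, new_slab() runs with
	   preemption disabled */
	if (gfpflags & __GFP_WAIT) {
		local_irq_enable();
		preempt_check_resched();
	}
	page = new_slab(s, gfpflags, node);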

> > +	irqsafe_cpu_inc(s->cpu_slab->active);
> > +	barrier();
> >  	object = __this_cpu_read(s->cpu_slab->freelist);
> > -	if (unlikely(!object || !node_match(s, node)))
> > +	if (unlikely(!object || !node_match(s, node) ||
> > +			__this_cpu_read(s->cpu_slab->active)))
>
> Missing a barrier() here?

The modifications of the s->cpu_slab->freelist in __slab_alloc() are only
done after interrupts have been disabled and after the slab has been locked.
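
Putting the quoted hunks together, the fastpath ordering being defended
looks roughly like this; the only writer that matters runs with
interrupts off, so a single compiler barrier after the increment
suffices:

	irqsafe_cpu_inc(s->cpu_slab->active);	/* mark fastpath active */
	barrier();	/* keep gcc from reading freelist before the inc */
	object = __this_cpu_read(s->cpu_slab->freelist);
	if (unlikely(!object || !node_match(s, node) ||
			__this_cpu_read(s->cpu_slab->active)))
		/* slow path: __slab_alloc() touches the per-cpu freelist
		   only after local_irq_save() and slab_lock() */
		object = __slab_alloc(s, gfpflags, node, addr);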

> The idea is to let gcc know that "active" inc/dec and "freelist" reads
> must never be reordered, even when the decrement is done in the
> slow-path branch.

Right. How could that occur with this code?

> > +		preempt_enable();
> >  		stat(s, FREE_FASTPATH);
> > -	} else
> > +	} else {
>
> Perhaps missing a barrier() in the else?

Not sure why that would be necessary. __slab_free() does not even touch
s->cpu_slab->freelist, for the same reasons as in the alloc path.
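
A sketch of the else branch in question (reconstructed from the
fragments above; the placement of the active decrement and the
preempt_enable() is an assumption):

	} else {
		irqsafe_cpu_dec(s->cpu_slab->active);
		preempt_enable();
		/* __slab_free() operates on page->freelist under the slab
		   lock with interrupts off; it never touches
		   s->cpu_slab->freelist, hence no barrier() needed here */
		__slab_free(s, page, x, addr);
	}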



