Subject: Re: [thiscpuops upgrade 10/10] Lockless (and preemptless) fastpaths for slub

    On Wed, Nov 24, 2010 at 1:51 AM, Christoph Lameter <cl@linux.com> wrote:
    > @@ -1737,23 +1770,53 @@ static __always_inline void *slab_alloc(
    >  {
    >        void **object;
    >        struct kmem_cache_cpu *c;
    > -       unsigned long flags;
    > +       unsigned long tid;
    >
    >        if (slab_pre_alloc_hook(s, gfpflags))
    >                return NULL;
    >
    > -       local_irq_save(flags);
    > +redo:
    > +       /*
    > +        * Must read kmem_cache cpu data via this cpu ptr. Preemption is
    > +        * enabled. We may switch back and forth between cpus while
    > +        * reading from one cpu area. That does not matter as long
    > +        * as we end up on the original cpu again when doing the cmpxchg.
    > +        */
    >        c = __this_cpu_ptr(s->cpu_slab);
    > +
    > +       /*
    > +        * The transaction ids are globally unique per cpu and per operation on
    > +        * a per cpu queue. Thus they guarantee that the cmpxchg_double
    > +        * occurs on the right processor and that there was no operation on the
    > +        * linked list in between.
    > +        */
    > +       tid = c->tid;
    > +       barrier();

    You're using a compiler barrier after every load from c->tid. Why?
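
    A minimal userspace sketch of the transaction id idea under discussion
    (an illustration only, not Christoph's patch): the list head and the tid
    are packed into one 64-bit word and swapped with an ordinary
    compare-and-swap instead of cmpxchg_double, and the per-cpu and
    preemption handling is left out entirely. Names such as head_tid and
    alloc_object are made up for the example.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NR_OBJECTS  8
    #define NONE        0xffffffffu

    /* next[i]: index of the free object that follows i on the freelist */
    static uint32_t next[NR_OBJECTS];

    /* low 32 bits: index of the first free object; high 32 bits: tid */
    static _Atomic uint64_t head_tid;

    static void init_pool(void)
    {
            uint32_t i;

            for (i = 0; i < NR_OBJECTS; i++)
                    next[i] = (i + 1 < NR_OBJECTS) ? i + 1 : NONE;
            atomic_store(&head_tid, 0);     /* head = 0, tid = 0 */
    }

    /* Pop one object index, or NONE if the pool is empty. */
    static uint32_t alloc_object(void)
    {
            uint64_t old, new;
            uint32_t head;

            do {
                    old = atomic_load(&head_tid);
                    head = (uint32_t)old;
                    if (head == NONE)
                            return NONE;    /* the real fastpath falls back to the slowpath */
                    /*
                     * The new head is next[head] and the tid is bumped, so
                     * any interleaved alloc/free changes the word, the CAS
                     * below fails, and we go back and re-read head and tid.
                     */
                    new = ((old >> 32) + 1) << 32 | next[head];
            } while (!atomic_compare_exchange_weak(&head_tid, &old, new));

            return head;
    }

    int main(void)
    {
            init_pool();
            printf("allocated object %u\n", alloc_object());
            printf("allocated object %u\n", alloc_object());
            return 0;
    }

    In the patch above, cmpxchg_double swaps the freelist pointer and the tid
    as a single unit; the sketch gets the same interleaving detection by
    packing an index and a counter into one word.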
    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/
