    From: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
    Date: Thu, 14 Jan 2010
    Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory barrier (v5)
    * Mathieu Desnoyers (mathieu.desnoyers@polymtl.ca) wrote:
    > * Peter Zijlstra (peterz@infradead.org) wrote:
    > > On Thu, 2010-01-14 at 11:26 -0500, Mathieu Desnoyers wrote:
    > >
    > > > It's this scenario that is causing the problem. Let's consider this
    > > > execution:
    > > >
    >
    > (slightly augmented)
    >
    > CPU 0 (membarrier)              CPU 1 (another mm -> our mm)
    >                                 <user-space>
    >                                 <kernel-space>
    >                                 switch_mm()
    >                                   smp_mb()
    >                                   clear_mm_cpumask()
    >                                   set_mm_cpumask()
    >                                   smp_mb() (by load_cr3() on x86)
    >                                 switch_to()
    > memory access before membarrier
    > <call sys_membarrier()>
    > smp_mb()
    > mm_cpumask includes CPU 1
    > rcu_read_lock()
    > if (CPU 1 mm != our mm)
    >   skip CPU 1.
    > rcu_read_unlock()
    > smp_mb()
    > <return to user-space>
    >                                 current = next (1)
    >                                 <switch back to user-space>
    >                                 urcu read lock()
    >                                 read gp
    >                                 store local gp (2)
    >                                 barrier()
    >                                 access critical section data (3)
    > memory access after membarrier
    >
    > So if we don't have any memory barrier between (1) and (3), the memory
    > operations can be reordered in such a way that CPU 0 will not send an
    > IPI to a CPU that would need to have its barrier() promoted into an
    > smp_mb().
    >
    > >
    > > I'm still not getting it. Sure, we don't send an IPI, but it will have
    > > done an mb() in switch_mm() to become our mm, so even without the IPI
    > > it will have executed the mb we were after.
    >
    > The augmented race window above shows that it would be possible for (2)
    > and (3) to be reordered across the barrier(), and therefore the critical
    > section access could spill over into an rcu-unlocked state.
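
    For reference, here is a minimal sketch of the userspace read lock in
    question. It assumes the sys_membarrier-based liburcu flavor, and the
    identifiers are illustrative rather than the exact liburcu names:

        #define barrier()  __asm__ __volatile__("" ::: "memory")

        extern unsigned long urcu_gp_ctr;              /* global grace period counter */
        extern __thread unsigned long urcu_reader_ctr; /* per-thread snapshot */

        static inline void urcu_read_lock(void)
        {
                urcu_reader_ctr = urcu_gp_ctr; /* read gp, store local gp (2) */
                barrier(); /* compiler-only barrier: the CPU remains free to
                            * reorder the store (2) with the critical section
                            * access (3); the IPI from sys_membarrier() is
                            * what promotes this into a real smp_mb() */
        }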

    To make this painfully clear, I'll reorder the accesses to match the
    order in which the CPU commits them to memory:

    CPU 0 (membarrier)              CPU 1 (another mm -> our mm)
                                    <user-space>
                                    <kernel-space>
                                    switch_mm()
                                      smp_mb()
                                      clear_mm_cpumask()
                                      set_mm_cpumask()
                                      smp_mb() (by load_cr3() on x86)
                                    switch_to()
                                    <buffered current = next>
                                    <switch back to user-space>
                                    urcu read lock()
                                    access critical section data (3)
    memory access before membarrier
    <call sys_membarrier()>
    smp_mb()
    mm_cpumask includes CPU 1
    rcu_read_lock()
    if (CPU 1 mm != our mm)
      skip CPU 1.
    rcu_read_unlock()
    smp_mb()
    <return to user-space>
    memory access after membarrier
                                    current = next (1) (buffer flush)
                                    read gp
                                    store local gp (2)

    This should make the problem a bit more evident. Access (3) is done
    outside of the read-side C.S. as far as the userspace synchronize_rcu()
    is concerned.
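
    To make the kernel side concrete as well, here is a rough sketch of the
    sys_membarrier() iteration we are discussing. It is simplified and only
    assumed to approximate the v5 patch (which sits in kernel/sched.c, where
    cpu_curr() is visible), not quoted from it:

        #include <linux/sched.h>
        #include <linux/smp.h>
        #include <linux/syscalls.h>

        static void membarrier_ipi(void *unused)
        {
                smp_mb(); /* promotes the reader's barrier() to a full mb */
        }

        SYSCALL_DEFINE0(membarrier)
        {
                int cpu;

                smp_mb();        /* order memory accesses before membarrier */
                rcu_read_lock(); /* protects the remote task_struct reads */
                for_each_cpu(cpu, mm_cpumask(current->mm)) {
                        /*
                         * Racy read: the remote CPU's "current = next" (1)
                         * may still sit in its store buffer, so we can read
                         * the old current, see a foreign mm, and wrongly
                         * skip the IPI. That is the race shown above.
                         */
                        if (cpu_curr(cpu)->mm == current->mm)
                                smp_call_function_single(cpu, membarrier_ipi,
                                                         NULL, 1);
                }
                rcu_read_unlock();
                smp_mb();        /* order memory accesses after membarrier */
                return 0;
        }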

    Thanks,

    Mathieu


    --
    Mathieu Desnoyers
    OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

