Subject: [PATCHv5 2/2] memory barrier: adding smp_mb__after_lock
    Adding the smp_mb__after_lock define, to be used as an smp_mb() call
    right after taking a lock.

    Making it a no-op for x86, since {read|write|spin}_lock() on x86 are
    full memory barriers.
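
    As an illustration (not part of the patch), the intended usage pattern
    is roughly the following; the lock and data names are made up for the
    example:

        spin_lock(&lock);
        smp_mb__after_lock();   /* full barrier after taking the lock;
                                 * a nop on x86, where the lock already
                                 * acts as one */
        if (shared_flag)        /* loads here must not be satisfied from
                                 * state predating the locked section */
            do_something();
        spin_unlock(&lock);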

    wbr,
    jirka


    Signed-off-by: Jiri Olsa <jolsa@redhat.com>

    ---
    arch/x86/include/asm/spinlock.h |    3 +++
    include/linux/spinlock.h        |    5 +++++
    include/net/sock.h              |    5 ++++-
    3 files changed, 12 insertions(+), 1 deletions(-)

    diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
    index b7e5db8..39ecc5f 100644
    --- a/arch/x86/include/asm/spinlock.h
    +++ b/arch/x86/include/asm/spinlock.h
    @@ -302,4 +302,7 @@ static inline void __raw_write_unlock(raw_rwlock_t *rw)
    #define _raw_read_relax(lock) cpu_relax()
    #define _raw_write_relax(lock) cpu_relax()

    +/* The {read|write|spin}_lock() on x86 are full memory barriers. */
    +#define smp_mb__after_lock() do { } while (0)
    +
    #endif /* _ASM_X86_SPINLOCK_H */
    diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
    index 252b245..ae053bd 100644
    --- a/include/linux/spinlock.h
    +++ b/include/linux/spinlock.h
    @@ -132,6 +132,11 @@ do { \
    #endif /*__raw_spin_is_contended*/
    #endif

    +/* The lock does not imply full memory barrier. */
    +#ifndef smp_mb__after_lock
    +#define smp_mb__after_lock() smp_mb()
    +#endif
    +
    /**
    * spin_unlock_wait - wait until the spinlock gets unlocked
    * @lock: the spinlock in question.
    diff --git a/include/net/sock.h b/include/net/sock.h
    index 4eb8409..98afcd9 100644
    --- a/include/net/sock.h
    +++ b/include/net/sock.h
    @@ -1271,6 +1271,9 @@ static inline int sk_has_allocations(const struct sock *sk)
    * in its cache, and so does the tp->rcv_nxt update on CPU2 side. The CPU1
    * could then endup calling schedule and sleep forever if there are no more
    * data on the socket.
    + *
    + * The sk_has_sleeper is always called right after a call to read_lock, so we
    + * can use smp_mb__after_lock barrier.
    */
    static inline int sk_has_sleeper(struct sock *sk)
    {
    @@ -1280,7 +1283,7 @@ static inline int sk_has_sleeper(struct sock *sk)
    *
    * This memory barrier is paired in the sock_poll_wait.
    */
    - smp_mb();
    + smp_mb__after_lock();
    return sk->sk_sleep && waitqueue_active(sk->sk_sleep);
    }
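
    For context, a rough sketch of how the two sides pair up (the
    sock_poll_wait() side belongs to patch 1/2 of this series; its body
    here is only assumed for illustration):

        /* wakeup side (e.g. sock_def_readable), this patch: */
        read_lock(&sk->sk_callback_lock);
        if (sk_has_sleeper(sk))          /* smp_mb__after_lock() inside */
            wake_up_interruptible(sk->sk_sleep);
        read_unlock(&sk->sk_callback_lock);

        /* sleep side, sock_poll_wait() from patch 1/2 (assumed shape): */
        poll_wait(file, sk->sk_sleep, wait);
        smp_mb();                        /* paired with the barrier in
                                          * sk_has_sleeper() */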

