    From: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Subject: [PATCH tip/core/rcu 20/88] atomics: Add header comment so spin_unlock_wait()
    Date: 2017-05-25
    There is material describing the ordering guarantees provided by
    spin_unlock_wait(), but it is not necessarily easy to find. This commit
    therefore adds a docbook header comment to this function informally
    describing its semantics.

    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Acked-by: Peter Zijlstra <peterz@infradead.org>
    ---
    include/linux/spinlock.h | 20 ++++++++++++++++++++
    1 file changed, 20 insertions(+)

    diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
    index 59248dcc6ef3..d9510e8522d4 100644
    --- a/include/linux/spinlock.h
    +++ b/include/linux/spinlock.h
    @@ -369,6 +369,26 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
     	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
     })

    +/**
    + * spin_unlock_wait - Interpose between successive critical sections
    + * @lock: the spinlock whose critical sections are to be interposed.
    + *
    + * Semantically this is equivalent to a spin_lock() immediately
    + * followed by a spin_unlock(). However, most architectures have
    + * more efficient implementations in which the spin_unlock_wait()
    + * cannot block concurrent lock acquisition, and in some cases
    + * where spin_unlock_wait() does not write to the lock variable.
    + * Nevertheless, spin_unlock_wait() can have high overhead, so if
    + * you feel the need to use it, please check to see if there is
    + * a better way to get your job done.
    + *
    + * The ordering guarantees provided by spin_unlock_wait() are:
    + *
    + * 1. All accesses preceding the spin_unlock_wait() happen before
    + * any accesses in later critical sections for this same lock.
    + * 2. All accesses following the spin_unlock_wait() happen after
    + * any accesses in earlier critical sections for this same lock.
    + */
     static __always_inline void spin_unlock_wait(spinlock_t *lock)
     {
     	raw_spin_unlock_wait(&lock->rlock);
    --
    2.5.2
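
For readers new to this primitive, here is a minimal sketch (not part of the patch) of the flag-plus-wait pattern that the two guarantees above are meant to support, loosely modeled on existing in-kernel uses. Every name in it (my_table, my_bucket, global_pending, my_fast_op(), my_global_op(), MY_NR_BUCKETS) is hypothetical and used only for illustration.

#include <linux/compiler.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#define MY_NR_BUCKETS	16

struct my_bucket {
	spinlock_t lock;
	/* ... per-bucket state ... */
};

struct my_table {
	bool global_pending;	/* set while a global operation is in progress */
	struct my_bucket buckets[MY_NR_BUCKETS];
};

/* Fast path: per-bucket locking, valid only while no global operation runs. */
static bool my_fast_op(struct my_table *tbl, struct my_bucket *b)
{
	spin_lock(&b->lock);
	if (READ_ONCE(tbl->global_pending)) {
		/* A global operation is in flight; the caller must fall back. */
		spin_unlock(&b->lock);
		return false;
	}
	/* ... operate on the bucket under b->lock ... */
	spin_unlock(&b->lock);
	return true;
}

/* Slow path: announce the global operation, then wait out fast-path holders. */
static void my_global_op(struct my_table *tbl)
{
	int i;

	WRITE_ONCE(tbl->global_pending, true);

	for (i = 0; i < MY_NR_BUCKETS; i++) {
		/*
		 * Guarantee 1: the store to global_pending above happens
		 * before any access in a later critical section on this
		 * bucket lock, so later fast-path acquisitions see the
		 * flag and bail out.
		 *
		 * Guarantee 2: accesses after this call happen after any
		 * access in an earlier critical section, so a fast-path
		 * holder that missed the flag has finished before the
		 * global operation touches its bucket.
		 */
		spin_unlock_wait(&tbl->buckets[i].lock);
	}

	/* ... the global operation now proceeds without fast-path interference ... */

	WRITE_ONCE(tbl->global_pending, false);
}

Compared with taking and releasing each bucket lock in turn, this leans on the point made in the new comment: spin_unlock_wait() typically does not block concurrent lock acquisition and, on some architectures, does not even write to the lock variable, at the price of the ordering subtleties the docbook text now spells out.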