Subject: Re: [PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS

On Thu, Apr 17, 2014 at 11:03:57AM -0400, Waiman Long wrote:
> +static __always_inline void
> +clear_pending_set_locked(struct qspinlock *lock, u32 val)
> +{
> +	struct __qspinlock *l = (void *)lock;
> +
> +	ACCESS_ONCE(l->locked_pending) = 1;
> +}
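
That single store works because, with NR_CPUS < 16K, locked and pending get
a byte each and are overlaid by one 16-bit field. A minimal standalone
sketch of that layout, assuming little-endian byte order (field names
mirror the quoted patch; the exact definition in the series may differ,
and tail16 is an illustrative name):

	#include <stdint.h>

	struct __qspinlock_sketch {
		union {
			uint32_t val;			 /* (tail, pending, locked) */
			struct {
				uint8_t  locked;	 /* bits  0..7  */
				uint8_t  pending;	 /* bits  8..15 */
				uint16_t tail;		 /* bits 16..31 */
			};
			struct {
				uint16_t locked_pending; /* bits  0..15 */
				uint16_t tail16;	 /* illustrative name */
			};
		};
	};

	/*
	 * Writing 1 to locked_pending sets locked = 1 and pending = 0 in a
	 * single halfword store, so the *,1,0 -> *,0,1 transition needs no
	 * atomic read-modify-write.
	 */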

> @@ -157,8 +251,13 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
>  	 * we're pending, wait for the owner to go away.
>  	 *
>  	 * *,1,1 -> *,1,0
> +	 *
> +	 * this wait loop must be a load-acquire such that we match the
> +	 * store-release that clears the locked bit and create lock
> +	 * sequentiality; this because not all try_clear_pending_set_locked()
> +	 * implementations imply full barriers.

You renamed the function referred to in the above comment.

>  	 */
> -	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> +	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
>  		arch_mutex_cpu_relax();
>
>  	/*
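
For reference, the pairing that comment wants is acquire-on-wait against
release-on-unlock. A rough C11 model of it (smp_load_acquire() /
smp_store_release() are the kernel primitives; the names and layout below
are illustrative only):

	#include <stdatomic.h>

	#define _Q_LOCKED_MASK	0xffU

	static atomic_uint lock_val;

	static void wait_for_owner(void)
	{
		/* Acquire: orders our critical section after everything
		 * the previous owner did before its release store. */
		while (atomic_load_explicit(&lock_val, memory_order_acquire)
		       & _Q_LOCKED_MASK)
			;	/* arch_mutex_cpu_relax() in the kernel */
	}

	static void unlock(void)
	{
		/* Release: pairs with the acquire load above; clears only
		 * the locked byte, leaving pending and tail untouched. */
		atomic_fetch_and_explicit(&lock_val,
					  ~(unsigned int)_Q_LOCKED_MASK,
					  memory_order_release);
	}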

