From: Mathieu Desnoyers
Date: 2010-01-21
Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory barrier (v5)
* Peter Zijlstra (peterz@infradead.org) wrote:
> On Tue, 2010-01-19 at 20:06 +0100, Peter Zijlstra wrote:
> >
> > We could possibly look at placing that assignment in context_switch()
> > between switch_mm() and switch_to(), which should provide a mb before
> > and after I think, Ingo?
>
> Right, just found out why we cannot do that: the first thing
> context_switch() does is prepare_task_switch(), which includes
> prepare_lock_switch(), which on __ARCH_WANT_UNLOCKED_CTXSW machines drops
> the rq->lock, and we have to have rq->curr assigned by then.
>

OK.

One efficient way to satisfy the requirement of sys_membarrier() would be
to create spin_lock_mb()/spin_unlock_mb(), which would provide full memory
barriers rather than acquire/release semantics. These could be used within
schedule(). On UP, they would turn into preempt off/on and a compiler
barrier, just like normal spin locks.
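To make the UP case concrete, here is a minimal sketch of what these
could reduce to when !CONFIG_SMP (illustrative only; preempt_disable()
and preempt_enable() already imply a compiler barrier, spelled out here
for clarity):

#ifndef CONFIG_SMP
static inline void spin_lock_mb(spinlock_t *lock)
{
	preempt_disable();
	barrier();	/* compiler barrier; no CPU barrier needed on UP */
}

static inline void spin_unlock_mb(spinlock_t *lock)
{
	barrier();	/* compiler barrier; no CPU barrier needed on UP */
	preempt_enable();
}
#endif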

On architectures like x86, the atomic instructions used by spin locks
already imply a full memory barrier, so we get a direct mapping and no
overhead. On architectures where the spin lock only provides acquire
semantics (e.g. powerpc, using lwsync and isync), we would have to create
an alternate implementation with "sync".
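For illustration only (this sketch is not from the original mail, and it
is simplified from the usual powerpc lwarx/stwcx. lock loop; the type
and field names approximate the powerpc implementation of the day): the
idea would be to end the lock acquisition with a full "sync" instead of
the acquire-only barrier, something along these lines:

/*
 * Sketch only: simplified powerpc-style lock ending in "sync" (full
 * barrier) rather than the acquire-only isync, so the lock itself
 * provides full memory-barrier semantics. The unlock side would
 * similarly issue "sync" before clearing the lock word.
 */
static inline void spin_lock_mb(raw_spinlock_t *lock)
{
	unsigned long tmp;

	__asm__ __volatile__(
"1:	lwarx	%0,0,%1\n"	/* load-reserve the lock word */
"	cmpwi	0,%0,0\n"	/* already held? */
"	bne-	1b\n"		/* yes, spin */
"	stwcx.	%2,0,%1\n"	/* try to store 1 */
"	bne-	1b\n"		/* lost the reservation, retry */
"	sync\n"			/* full barrier, not just acquire */
	: "=&r" (tmp)
	: "r" (&lock->slock), "r" (1)
	: "cr0", "memory");
}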

We can even create a generic fallback with the following kind of code in
the meantime:

static inline void spin_lock_mb(spinlock_t *lock)
{
	spin_lock(lock);	/* acquire semantics... */
	smp_mb();		/* ...upgraded to a full barrier */
}

static inline void spin_unlock_mb(spinlock_t *lock)
{
	smp_mb();		/* full barrier... */
	spin_unlock(lock);	/* ...before the release */
}

How does that sound?

Mathieu


--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68

