Subject: Re: [PATCH tip/locking/core v4 1/6] powerpc: atomic: Make *xchg and *cmpxchg a full barrier

On Mon, Oct 26, 2015 at 02:20:21PM +1100, Paul Mackerras wrote:
> On Wed, Oct 21, 2015 at 10:18:33AM +0200, Peter Zijlstra wrote:
> > On Tue, Oct 20, 2015 at 02:28:35PM -0700, Paul E. McKenney wrote:
> > > I am not seeing a sync there, but I really have to defer to the
> > > maintainers on this one. I could easily have missed one.
> >
> > So x86 implies a full barrier for everything that changes the CPL; and
> > some form of implied ordering seems a must if you change the privilege
> > level unless you tag every single load/store with the priv level at that
> > time, which seems the more expensive option.
> >
> > So I suspect the typical implementation will flush all load/stores,
> > change the effective priv level and continue.
> >
> > This can of course be implemented as a pure per-CPU ordering (RCpc),
> > which would be in line with the rest of Power, in which case you do
> > indeed need an explicit sync to make it visible to other CPUs.
>
> Right - interrupts and returns from interrupt are context
> synchronizing operations, which means they wait until all outstanding
> instructions have got to the point where they have reported any
> exceptions they're going to report, which means in turn that loads and
> stores have completed address translation. But all of that doesn't
> imply anything about the visibility of the loads and stores.
>
> There is a full barrier in the context switch path, but not in the
> system call entry/exit path.
>

Thank you, Paul. That's much clearer now ;-)
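
So if the syscall entry/exit path gives us no ordering, the fully-ordered
*cmpxchg has to carry its own barriers on both sides. Just as a rough sketch
of the pattern I have in mind (the function name here is made up for
illustration; the real code sits behind the PPC_ATOMIC_ENTRY_BARRIER /
PPC_ATOMIC_EXIT_BARRIER macros in arch/powerpc), assuming the usual
sync + lwarx/stwcx. sequence:

/*
 * Sketch only, not the actual arch/powerpc implementation: a
 * fully-ordered 32-bit cmpxchg puts a heavyweight sync on both
 * sides of the ll/sc loop, since neither the syscall path nor the
 * ll/sc pair itself provides the required global ordering.
 */
static inline unsigned int
cmpxchg_u32_full(volatile unsigned int *p, unsigned int old, unsigned int new)
{
	unsigned int prev;

	__asm__ __volatile__(
"	sync\n"			/* order earlier loads/stores		*/
"1:	lwarx	%0,0,%2\n"	/* load-reserve the current value	*/
"	cmpw	0,%0,%3\n"	/* compare against 'old'		*/
"	bne-	2f\n"		/* mismatch: skip store and exit sync	*/
"	stwcx.	%4,0,%2\n"	/* conditionally store 'new'		*/
"	bne-	1b\n"		/* lost the reservation: retry		*/
"	sync\n"			/* order later loads/stores		*/
"2:"
	: "=&r" (prev), "+m" (*p)
	: "r" (p), "r" (old), "r" (new)
	: "cc", "memory");

	return prev;
}

(Note the failure path branches past the trailing sync, mirroring how the
exit barrier is laid out in the kernel's cmpxchg code.)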

Regards,
Boqun