Subject: Re: [RFC PATCH] qspinlock: Improve performance by reducing load instruction rollback
On Mon, Oct 19, 2015 at 10:27:22AM +0800, ling.ma.program@gmail.com wrote:
> From: Ma Ling <ling.ml@alibaba-inc.com>
>
> All load instructions can run speculatively, but they have to follow
> the memory ordering rule across multiple cores, as below:
> _x = _y = 0
>
> Processor 0              Processor 1
>
> mov r1, [_y]  //M1       mov [_x], 1  //M3
> mov r2, [_x]  //M2       mov [_y], 1  //M4
>
> If r1 = 1, r2 must be 1
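
(That quoted example is the classic message-passing litmus test. A stand-alone
C11 sketch of it is below, with made-up names, and seq_cst atomics standing in
for x86's ordering guarantees; the only point is that the outcome r1 == 1,
r2 == 0 is forbidden.)

/*
 * Hedged sketch, not part of the patch: the quoted example written
 * with C11 atomics.  x and y play the role of _x and _y above.
 */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_int x, y;

static void *processor1(void *arg)
{
	atomic_store(&x, 1);		/* M3 */
	atomic_store(&y, 1);		/* M4 */
	return NULL;
}

static void *processor0(void *arg)
{
	int r1 = atomic_load(&y);	/* M1 */
	int r2 = atomic_load(&x);	/* M2 */
	if (r1 == 1)
		assert(r2 == 1);	/* the rule: r1 == 1 implies r2 == 1 */
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;
	pthread_create(&t1, NULL, processor1, NULL);
	pthread_create(&t0, NULL, processor0, NULL);
	pthread_join(t1, NULL);
	pthread_join(t0, NULL);
	return 0;
}
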
>
> In order to guarantee the above rule, although Processor 0 may execute
> M1 and M2 out of order, they are kept in the ROB; when the load
> buffer entry for _x in Processor 0 receives the update message
> from Processor 1, Processor 0 has to roll back from instruction M2,
> which flushes the whole pipeline. That latency exceeds the penalty
> of a branch prediction miss.
>
> In this patch we use the lock cmpxchg instruction to force load

"lock cmpxchg" makes me think you're working on x86.

> instructions to be serialized,

smp_rmb() does that, and that's 'free' on x86. Because x86 doesn't do
read reordering.
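
(To make "free" concrete: an SMP read barrier on x86 only has to keep the
compiler from reordering loads; no fence instruction is needed. A rough
illustration, using made-up names rather than the kernel's actual macro
definitions:)

/*
 * Illustrative sketch only, not the kernel's real barrier.h: on x86 an
 * SMP read barrier can be a pure compiler barrier, because the CPU
 * already keeps loads ordered with respect to other loads.
 */
#define my_barrier()	__asm__ __volatile__("" ::: "memory")
#define my_smp_rmb()	my_barrier()	/* no fence needed for load-load order */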

> the destination operand
> receives a write cycle without regard to the result of
> the comparison, which can help us reduce the penalty
> from load instruction rollback.

And that makes me think I'm not understanding what you're getting at. If
you need to force memory order, a "fence" (or smp_mb()) would still be
cheaper than endlessly pulling the line into exclusive state for no
reason, right?
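
(A user-space C11 sketch of that difference, with made-up names and a
hypothetical lock word: the first routine waits with plain loads, so the cache
line can stay shared among the waiters; the second retries a compare-and-swap,
which is a write cycle that pulls the line exclusive on every attempt, even
when the comparison fails.)

#include <stdatomic.h>

/* Hypothetical lock word: 0 = free, 1 = held. */
static atomic_int lock_word;

/* Spin read-only until the lock looks free, then try to grab it:
 * while waiting, the cache line can remain in shared state. */
static void lock_by_load_then_cmpxchg(void)
{
	for (;;) {
		while (atomic_load_explicit(&lock_word,
					    memory_order_relaxed) != 0)
			;	/* a pause/cpu_relax() would go here */
		int expected = 0;
		if (atomic_compare_exchange_strong_explicit(&lock_word,
				&expected, 1,
				memory_order_acquire, memory_order_relaxed))
			return;
	}
}

/* Spin on cmpxchg itself: every failed attempt is still a write
 * cycle, so the line ping-pongs in exclusive state between waiters. */
static void lock_by_cmpxchg_only(void)
{
	int expected = 0;
	while (!atomic_compare_exchange_weak_explicit(&lock_word,
			&expected, 1,
			memory_order_acquire, memory_order_relaxed))
		expected = 0;
}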

> Our experiment indicates the performance can be improved by 10%~15%
> for 2 and 3 threads cases, the conflicts from lock cache line
> spend them most of the time.

That just doesn't parse, what?

