On Mon, 2010-02-01 at 18:33 +1100, Nick Piggin wrote:
> > Adds no overhead on x86, because the LOCK-prefixed atomic operations
> > used by spin lock/unlock already imply a full memory barrier. Combines
> > the spin lock acquire/release barriers with the full memory barrier to
> > diminish the performance impact on other architectures. (A per-architecture
> > spinlock-mb.h should be gradually implemented to replace the generic
> > version.)
> It does add overhead on x86, as well as on most other architectures.
> This really seems like the wrong optimisation to make, especially
> given that there's not likely to be much using librcu yet, right?
> I'd go with the simpler and safer version of sys_membarrier that does
> not do tricky synchronisation or add overhead to the ctxsw fastpath.
> Then, if you see some actual improvement in a real program using
> librcu one day, we can discuss making it faster.
> As it is right now, the change will definitely slow down everybody
> not using librcu (i.e. nearly everything).
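
For context, the combined primitive the quoted description refers to
would presumably have a generic fallback along these lines (a
hypothetical reconstruction; the names follow the quote, the actual
spinlock-mb.h may well differ):

	/* asm-generic fallback: pay a full barrier around unlock */
	#ifndef smp_mb__before_unlock
	#define smp_mb__before_unlock()	smp_mb()
	#endif
	#ifndef smp_mb__after_unlock
	#define smp_mb__after_unlock()	smp_mb()
	#endif

with the idea being that an architecture whose lock/unlock sequence
already implies a full barrier overrides these with cheaper
definitions.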

Right, so the problem with the 'slow'/'safe' version is that it takes
rq->lock for all relevant rqs, which turns while (1) sys_membarrier();
into a quite effective DoS.
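
That is, nothing stops an unprivileged task from running, say (a
userspace sketch; __NR_membarrier is the number x86-64 eventually
assigned, no number exists at the time of writing):

	#include <unistd.h>
	#include <sys/syscall.h>

	#ifndef __NR_membarrier
	#define __NR_membarrier 324	/* assumption, see above */
	#endif

	int main(void)
	{
		/* each call takes rq->lock for every relevant cpu,
		 * serializing against the scheduler system-wide */
		for (;;)
			syscall(__NR_membarrier);
	}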

Now, I'm not quite charmed by any of this; this patch in particular
seems wrong. The fact is, on x86 we already have all the required
memory barriers in place: there are a number of LOCK-prefixed
instructions before we set rq->curr, and we have them after as well.
Adding more, as this patch effectively does
(smp_mb__{before,after}_unlock should be a full mb, as Nick pointed
out), doesn't seem like a good idea at all.
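
To illustrate the x86 point with a compilable userspace sketch (not
actual scheduler code): a LOCK-prefixed read-modify-write already
orders like an mfence, so a full barrier stacked next to one is pure
overhead:

	#include <stdatomic.h>

	static _Atomic int lock_word;
	static int data;

	void publish(int v)
	{
		data = v;
		/* atomic_exchange compiles to XCHG on x86, which is
		 * implicitly LOCKed and therefore a full fence */
		atomic_exchange(&lock_word, 1);
		/* an atomic_thread_fence(memory_order_seq_cst) here
		 * would be redundant: the XCHG has already ordered
		 * everything before it against everything after */
	}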

And then there's !x86 to consider.
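
There, spinlock acquire/release typically map to acquire/release
barriers only (lwsync on powerpc, for instance), and those never order
a store against a later load. The classic store-buffering litmus test
shows the gap (a sketch in C11 atomics, not kernel code):

	#include <stdatomic.h>

	static _Atomic int X, Y;

	int thread0(void)
	{
		atomic_store_explicit(&X, 1, memory_order_release);
		return atomic_load_explicit(&Y, memory_order_acquire);
	}

	int thread1(void)
	{
		atomic_store_explicit(&Y, 1, memory_order_release);
		return atomic_load_explicit(&X, memory_order_acquire);
	}

Run concurrently, both can return 0 on a weakly ordered machine;
forbidding that outcome takes a real smp_mb() in the path, which is
exactly the overhead being objected to.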
