Date: Wed, 2 Apr 2008
From: Mikulas Patocka
Subject: Re: [PATCH]: Fix SMP-reordering race in mark_buffer_dirty

On Wed, 2 Apr 2008, Linus Torvalds wrote:

> On Wed, 2 Apr 2008, Mikulas Patocka wrote:
> > +	/*
> > +	 * Make sure that the test for buffer_dirty(bh) is not reordered with
> > +	 * previous modifications to the buffer data.
> > +	 * -- mikulas
> > +	 */
> > +	smp_mb();
> >  	WARN_ON_ONCE(!buffer_uptodate(bh));
> >  	if (!buffer_dirty(bh) && !test_set_buffer_dirty(bh))
>
> At that point, the better patch is to just *remove* the buffer_dirty()
> test, and rely on the stronger ordering requirements of
> test_set_buffer_dirty().
>
> The whole - and only - point of the buffer_dirty() check was to avoid the
> more expensive test_set_buffer_dirty() call, but it's only more expensive
> because of the barrier semantics. So if you add a barrier, the point goes
> away and you should instead remove the optimization.
>
> (I also seriously doubt you can actually trigger this in real life, but
> simplifying the code is probably fine regardless).
>
> Linus
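
For reference, the simplification suggested above would look roughly like
this (only a sketch against the current mark_buffer_dirty(); the branch
body is not shown in the quoted patch, so it is elided here as well):

    #include <linux/buffer_head.h>

    void mark_buffer_dirty(struct buffer_head *bh)
    {
            WARN_ON_ONCE(!buffer_uptodate(bh));
            /*
             * test_and_set_bit() (which test_set_buffer_dirty() uses)
             * implies a full memory barrier, so previous modifications
             * to the buffer data are ordered before the dirty test;
             * neither the explicit smp_mb() nor the unordered
             * buffer_dirty() fast path is needed.
             */
            if (!test_set_buffer_dirty(bh)) {
                    /* ... dirty the page as before (elided) ... */
            }
    }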

I measured it:
On Core2, mfence is faster: 8 ticks vs. 27 for lock btsl.
On Pentium-4-Prescott, mfence is slower: 124 ticks vs. 86 for lock btsl.
On Pentium-4-pre-Prescott, mfence wins again: 100 ticks vs. 120 for lock btsl.
On Athlon, mfence is slightly faster: 16 ticks vs. 19 for lock btsl.
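
For illustration, per-instruction timings like these can be taken in user
space along the following lines (just a sketch, not the harness I actually
used; the usual rdtsc caveats about serialization and averaging apply):

    /* Rough timing of mfence vs. lock btsl with rdtsc (gcc, x86). */
    #include <stdio.h>
    #include <stdint.h>

    static inline uint64_t rdtsc(void)
    {
            uint32_t lo, hi;
            asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
            return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
            enum { N = 1000000 };
            volatile unsigned int word = 0;
            uint64_t t0, t1;
            int i;

            t0 = rdtsc();
            for (i = 0; i < N; i++)
                    asm volatile("mfence" ::: "memory");
            t1 = rdtsc();
            printf("mfence:    %.1f ticks\n", (double)(t1 - t0) / N);

            t0 = rdtsc();
            for (i = 0; i < N; i++)
                    asm volatile("lock btsl $0, %0"
                                 : "+m" (word) : : "memory", "cc");
            t1 = rdtsc();
            printf("lock btsl: %.1f ticks\n", (double)(t1 - t0) / N);
            return 0;
    }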

So you're right: the gain from mfence is small enough (and on Prescott it
is actually a loss) that the buffer_dirty() test can simply be removed in
favor of test_set_buffer_dirty() alone.

I don't know whether there are other architectures where smp_mb() would be
significantly faster than test_and_set_bit().

Mikulas

