Subject: Re: 2.5.65-mm4
Ingo Molnar <mingo@elte.hu> wrote:
>
>
> On Sun, 23 Mar 2003, Andrew Morton wrote:
>
> > Note that the lock_kernel() contention has been drastically reduced and
> > we're now hitting semaphore contention.
> >
> > Running `dbench 32' on the quad Xeon, this patch took the context switch
> > rate from 500/sec up to 125,000/sec.
>
> note that there is _nothing_ wrong in doing 125,000 context switches per
> sec, as long as performance increases over the lock_kernel() variant.

Yes, but we also take a big hit before we even get to schedule().

We're ping-ponging the semaphore's waitqueue lock around, doing atomic ops
against the semaphore counter, etc.
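
To make that cost concrete, here is a rough sketch (illustrative names only,
not the real 2.5 semaphore code) of what every contending CPU pays before it
ever reaches schedule():

#include <linux/spinlock.h>
#include <linux/list.h>
#include <asm/atomic.h>

/*
 * Simplified sketch only -- not the real semaphore implementation.
 * The point: even before schedule() runs, each contending CPU does an
 * atomic RMW on the counter and takes the waitqueue spinlock, so both
 * cache lines bounce between CPUs.
 */
struct sketch_sem {
	atomic_t		count;		/* bounced by the atomic ops */
	spinlock_t		wait_lock;	/* bounced by every sleeper and waker */
	struct list_head	wait_list;
};

static void sketch_down(struct sketch_sem *sem)
{
	if (atomic_dec_return(&sem->count) >= 0)
		return;				/* uncontended fast path */

	spin_lock(&sem->wait_lock);		/* contended: more bouncing */
	/* ... queue the current task on wait_list, then schedule() ... */
	spin_unlock(&sem->wait_lock);
}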

In the case of ext2 the codepath which needs to be locked is very small, and
converting it to use a per-blockgroup spinlock was a big win on the 16-way
NUMA machines, and perhaps the 8-way x440s. On 4-way Xeon and ppc64 the
effects were very small indeed: 1.5% on Xeon, zero on ppc64.
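
The shape of that conversion is roughly the following; this is a sketch with
made-up names, not the actual ext2 patch:

#include <linux/spinlock.h>

/*
 * Sketch only -- illustrative, not the real ext2 change.  Instead of
 * serializing every allocation on the one per-filesystem lock_super(),
 * give each block group its own spinlock.  The critical section is just
 * the bitmap search-and-set for that group, so a spinlock is cheap and
 * allocations in different groups proceed in parallel.
 */
struct sketch_group_locks {
	unsigned long	nr_groups;
	spinlock_t	*locks;		/* one lock per block group */
};

static inline void sketch_lock_group(struct sketch_group_locks *gl,
				     unsigned long group)
{
	spin_lock(&gl->locks[group]);
}

static inline void sketch_unlock_group(struct sketch_group_locks *gl,
				       unsigned long group)
{
	spin_unlock(&gl->locks[group]);
}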

In the case of ext3 I suspect lock_journal() in JBD, not lock_super() in the
ext3 block allocator. The hold times in there are much longer, so we may have
a more complex problem. But until lock_super() is cleared up it is hard to
tell.

> > I've asked Alex to put together a patch for spinlock-based locking in
> > the block allocator (cut-n-paste from ext2).
>
> sure, do this if it increases performance. But if it _decreases_
> performance then it's plain pointless to do this just to avoid
> context-switches. With the 2.4 scheduler i'd agree - avoid
> context-switches like the plague. But context-switches are 100% localized
> to the same CPU with the O(1) scheduler, they (should) cause (almost) no
> scalability problem. The only thing this change will 'fix' is the
> context-switch statistics.

The funny thing is that when this is happening we tend to clock up a lot of
idle time. But Martin tends not to share vmstat traces with us (hint), so I
don't know whether it was happening this time.


