Subject: Re: 2.5.65-mm4

On Sun, 23 Mar 2003, Andrew Morton wrote:

> Note that the lock_kernel() contention has been drastically reduced and
> we're now hitting semaphore contention.
>
> Running `dbench 32' on the quad Xeon, this patch took the context switch
> rate from 500/sec up to 125,000/sec.

note that there is _nothing_ wrong with doing 125,000 context switches per
second, as long as performance increases over the lock_kernel() variant.

> I've asked Alex to put together a patch for spinlock-based locking in
> the block allocator (cut-n-paste from ext2).

sure, do this if it increases performance. But if it _decreases_
performance then it's plain pointless to do it just to avoid
context-switches. With the 2.4 scheduler i'd agree - avoid
context-switches like the plague. But with the O(1) scheduler
context-switches are 100% localized to the same CPU, so they (should)
cause (almost) no scalability problem. The only thing this change will
'fix' is the context-switch statistics.
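
for reference, a rough sketch of the two locking styles being compared;
the function names and the allocator stub are made up for illustration,
this is not the actual ext2/ext3 allocator code:

#include <linux/spinlock.h>
#include <asm/semaphore.h>

static DECLARE_MUTEX(alloc_sem);			/* sleeping lock */
static spinlock_t alloc_lock = SPIN_LOCK_UNLOCKED;	/* busy-waiting lock */

/* semaphore variant: contention puts the caller to sleep (context switch) */
static void alloc_block_sem(void)
{
	down(&alloc_sem);
	/* ... pick a free block ... */
	up(&alloc_sem);
}

/* spinlock variant: contention spins on the owning CPU, no sleeping allowed */
static void alloc_block_spin(void)
{
	spin_lock(&alloc_lock);
	/* ... pick a free block, must not sleep here ... */
	spin_unlock(&alloc_lock);
}

the spinlock version only wins if the critical section is short enough
that spinning is cheaper than the sleep/wakeup path.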

plus someone might want to try a simple spin-sleep semaphore
implementation at this point. A context-switch takes roughly 2 usecs on
a typical x86 box, so i'd say spinning for 0.5 or 1.0 usecs before
sleeping could provide some speedup.
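
something like the (hypothetical, untested) sketch below; the helper
name, the cycle budget and the get_cycles() based timing are just
illustration, not an existing kernel API:

#include <asm/semaphore.h>
#include <asm/timex.h>
#include <asm/processor.h>

#define SPIN_CYCLES	1000	/* roughly 0.5-1.0 usecs on a GHz-class x86 box */

/* spin briefly hoping the holder drops the semaphore, then sleep */
static void spin_then_down(struct semaphore *sem)
{
	cycles_t start = get_cycles();

	while (get_cycles() - start < SPIN_CYCLES) {
		if (!down_trylock(sem))
			return;		/* got it without a context switch */
		cpu_relax();
	}
	down(sem);			/* spin budget used up, sleep as usual */
}

whether this wins depends on how long the semaphore is typically held
relative to the ~2 usec context-switch cost.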

Ingo

