Subject: Re: [PATCH] NUMA scheduler 1/2
From: Michael Hohnbaum <hohnbaum@us.ibm.com>
Date: Fri, 25 Oct 2002

On Fri, 2002-10-25 at 10:37, Erich Focht wrote:
> Here come the rediffed (for 2.5.44) patches for my version of the
> NUMA scheduler extensions. I'm only sending the first two parts of
> the complete set of 5 patches (which make the node affine NUMA scheduler
> with dynamic homenode selection). The two patches lead to a pooling
> NUMA scheduler with initial load balancing at exec().
>
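
(As a rough illustration of what "initial load balancing at exec()" means
here: at exec() time the scheduler picks the least loaded node as the task's
homenode, so the new image is built out of, and keeps using, local memory.
Below is a minimal userspace sketch of just the node-selection step; all
names, such as sched_best_node and node_nr_running, are hypothetical
stand-ins, not code from Erich's patch.)

#include <stdio.h>

#define NR_NODES 4

/* Hypothetical per-node counts of runnable tasks; the real patch
 * would read these from the scheduler's per-node runqueue data. */
static int node_nr_running[NR_NODES] = { 3, 1, 4, 2 };

/* Pick the emptiest node, the way an exec()-time balancer might. */
static int sched_best_node(void)
{
	int node, best = 0;

	for (node = 1; node < NR_NODES; node++)
		if (node_nr_running[node] < node_nr_running[best])
			best = node;
	return best;
}

int main(void)
{
	/* A task calling exec() would be moved to this node before it
	 * faults in its new address space, so its pages end up local. */
	printf("exec()ing task goes to node %d\n", sched_best_node());
	return 0;
}

(Doing this once, at exec() time, is attractive because the task has no
cache or memory footprint yet, so the migration is essentially free.)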

These patches produced a kernel that built and booted on the first try for
me. Thanks. I ran kernbench and your numa_test (schedbench) on this NUMA
scheduler (erich44), my simple NUMA scheduler (hbaum44), and a stock
kernel (stock44).

Kernbench:
                       Elapsed        User      System         CPU
            stock44     21.08s     196.80s      58.14s     1208.8%
            hbaum44     20.49s     192.57s      50.32s     1184.8%
            erich44     21.01s     193.47s      56.71s     1191.0%

Schedbench 4:
                       AvgUser     Elapsed   TotalUser    TotalSys
            stock44      39.47       49.99      157.94        0.96
            hbaum44      38.43       48.76      153.77        1.12
            erich44      24.28       36.10       97.15        0.79

Schedbench 8:
                       AvgUser     Elapsed   TotalUser    TotalSys
            stock44      49.46       71.07      395.77        1.92
            hbaum44      37.52       57.99      300.25        2.17
            erich44      30.67       47.93      245.48        2.59

Schedbench 16:
                       AvgUser     Elapsed   TotalUser    TotalSys
            stock44      64.17       81.48     1026.94        6.41
            hbaum44      52.23       73.23      835.81        5.18
            erich44      52.25       61.20      836.12        4.69

Schedbench 32:
                       AvgUser     Elapsed   TotalUser    TotalSys
            stock44      72.45      165.86     2318.84       12.78
            hbaum44      56.74      137.58     1816.17        8.81
            erich44      55.98      121.19     1791.58        9.35

Schedbench 64:
                       AvgUser     Elapsed   TotalUser    TotalSys
            stock44     110.31      461.29     7060.60       26.02
            hbaum44      58.30      255.90     3732.08       20.10
            erich44      56.94      237.09     3644.95       21.26

The results seem fairly consistent with what we have been seeing all
along. Erich's scheduler tends to be about the same as stock on
kernbench, while mine is roughly 5% better.

On schedbench, Erich's scheduler does better at small loads, but as the
load increases to one task per CPU it becomes a dead heat between the
two.

It is probably worth noting that my scheduler change is a bit smaller,
with 146 insertions and 27 deletions across 3 files, versus 432 insertions
and 127 deletions across 4 files. But that is to be expected, as my goal
was to keep the changes as small as possible while still providing
measurable performance gains.
--

Michael Hohnbaum 503-578-5486
hohnbaum@us.ibm.com T/L 775-5486

