Subject: Re: [BENCHMARK] 2.6.3-rc2 v 2.6.3-rc3-mm1 kernbench
Quoting Nick Piggin <piggin@cyberone.com.au>:
> Bill Davidsen wrote:
> >On Tue, 17 Feb 2004, Nick Piggin wrote:
> >>Con hasn't tried HT off AFAIK because we couldn't work out how to
> >>turn it off at boot time! :(
> >The curse of the brain-dead BIOS :-(

Err, not quite; it's an X440 that is 10000 miles away, so I can't really access
the BIOS easily :P

> >So does turning CONFIG_SCHED_SMT off mean not using more than one sibling
> >per package, or just going back to using them poorly? Yes, I should go
> >root through the code.
>
> It just goes back to treating them the same as physical CPUs.
> The option will eventually be removed.
>
> >Clearly it would be good to get one more data point with HT off in BIOS,
> >but from this data it looks as if the SMT stuff really helps little when
> >the system is very heavily loaded (Nproc>=Nsibs), and does best when the
> >load is around Nproc==Ncpu. At least as I read the data.

Actually that machine has 8 packages and 16 logical CPUs, and kernbench's half
load by default is set to cpus / 2. The idea behind that load is to minimise
wasted idle time, so this is where good tuning should show the most bonus, as
it does.

> >The really interesting data would be the -j64 load without HT, using both
> >schedulers.
>
> The biggest problems with SMT happen when 1 < Nproc < Nsibs,
> because every pair of processes that ends up on one physical CPU
> leaves another physical CPU idle, and the non-HT scheduler can't
> detect or correct this.
>
> At higher numbers of processes, you fill all virtual CPUs,
> so physical CPUs don't become idle. You can still be smarter
> about cache and migration costs though.
>
> >I just got done looking at a mail server with HT that kept the load avg at
> >40-70 for a week. Speaks highly for the stability of RHEL-3.0, but I wouldn't
> >mind a little more performance for free.
>
> Not sure if they have any sort of HT aware scheduler or not.
> If they do it is probably a shared runqueues type which is
> much the same as sched domains in terms of functionality.
> I don't think it would help much here though.

I think any bonus at the optimal and maximum loads on kernbench is remarkable,
since it's usually easy to keep all CPUs busy when the load is 4 x num_cpus
(-j64 on this box) or higher. The fact that sched_domains still shows a decent
percentage bonus at these loads speaks volumes about the effectiveness of this
patch.
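
To make the placement issue Nick describes above a bit more concrete, here is a
throwaway user-space sketch (not kernel code; the 8 packages x 2 siblings are
just this box's topology, and the two placement policies are simplified
stand-ins rather than what either scheduler literally does). It counts how many
packages are left idle in the worst case a sibling-blind scheduler can't
correct, versus a placement that spreads across packages first:

#include <stdio.h>

#define PACKAGES 8
#define SIBLINGS 2
#define NCPUS (PACKAGES * SIBLINGS)

/*
 * Worst case a sibling-blind scheduler can't detect or correct:
 * runnable processes fill both siblings of a package before the
 * next package gets anything.
 */
static int idle_packages_packed(int nproc)
{
    int busy[PACKAGES] = { 0 };
    int p, i, idle = 0;

    for (p = 0; p < nproc && p < NCPUS; p++)
        busy[p / SIBLINGS] = 1;    /* cpus 0,1 -> pkg 0; 2,3 -> pkg 1; ... */
    for (i = 0; i < PACKAGES; i++)
        idle += !busy[i];
    return idle;
}

/* SMT-aware goal: one process per package before any second sibling is used. */
static int idle_packages_spread(int nproc)
{
    int busy[PACKAGES] = { 0 };
    int p, i, idle = 0;

    for (p = 0; p < nproc && p < NCPUS; p++)
        busy[p % PACKAGES] = 1;    /* round-robin across packages first */
    for (i = 0; i < PACKAGES; i++)
        idle += !busy[i];
    return idle;
}

int main(void)
{
    int n;

    printf("nproc  idle pkgs (packed)  idle pkgs (spread)\n");
    for (n = 1; n <= NCPUS; n++)
        printf("%5d  %18d  %18d\n",
               n, idle_packages_packed(n), idle_packages_spread(n));
    return 0;
}

For Nproc up to 14 the packed worst case still leaves whole packages idle,
while spreading across packages first leaves none once Nproc reaches 8.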

Con
