Date: 4 Oct 2001
From: Mike Kravetz <kravetz@us.ibm.com>
Subject: Context switch times
I've been working on a rewrite of our Multi-Queue scheduler
and am using the lat_ctx program of LMbench as a benchmark.
I'm lucky enough to have access to an 8-CPU system for use
during development. At one point I 'accidentally' booted the
kernel that came with the distribution installed on this
machine, version 2.2.16-22. The results of running lat_ctx on
this kernel, compared to 2.4.10, really surprised me. Here is
an example:

2.4.10 on 8 CPUs: lat_ctx -s 0 -r 2 results
"size=0k ovr=2.27
2 3.86

2.2.16-22 on 8 CPUs: lat_ctx -s 0 -r 2 results
"size=0k ovr=1.99
2 1.44

In the lat_ctx output above, the first column is the number of
processes and the second is the reported context switch time in
microseconds. As you can see, the context switch time for 2.4.10
(3.86 usec) is more than double that for 2.2.16-22 (1.44 usec)
in this example.
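
For anyone unfamiliar with it, lat_ctx (with -s 0) passes a token
around a ring of processes connected by pipes and reports the
per-context-switch time in microseconds. The sketch below is not
lat_ctx itself; it is a stripped-down, two-process version of the
same pipe ping-pong idea. The TRIPS count and the plain
gettimeofday() timing are just illustrative choices, and unlike
lat_ctx it does not subtract the 'ovr' loop overhead reported
above.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/wait.h>

#define TRIPS 100000	/* illustrative; lat_ctx picks its own counts */

int main(void)
{
	int p2c[2], c2p[2];	/* parent->child, child->parent pipes */
	char token = 'x';
	struct timeval start, end;
	double usecs;
	int i;

	if (pipe(p2c) < 0 || pipe(c2p) < 0) {
		perror("pipe");
		exit(1);
	}

	if (fork() == 0) {
		/* child: echo the token back TRIPS times */
		for (i = 0; i < TRIPS; i++) {
			read(p2c[0], &token, 1);
			write(c2p[1], &token, 1);
		}
		exit(0);
	}

	gettimeofday(&start, NULL);
	for (i = 0; i < TRIPS; i++) {
		write(p2c[1], &token, 1);
		read(c2p[0], &token, 1);
	}
	gettimeofday(&end, NULL);
	wait(NULL);

	usecs = (end.tv_sec - start.tv_sec) * 1e6 +
		(end.tv_usec - start.tv_usec);
	/* each trip is two switches; loop overhead is NOT subtracted */
	printf("%.2f usec per context switch (raw)\n",
	       usecs / (2.0 * TRIPS));
	return 0;
}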

Comments?

One observation I did make is that this may be related to CPU
affinity/cache warmth. If you increase the number of 'TRIPS'
to a very large number, you can run 'top' and observe per-CPU
utilization. On 2.2.16-22, the '2 task' benchmark seemed to
stay on 3 of the 8 CPUs. On 2.4.10, the 2 tasks ran across
all 8 CPUs, with roughly equal utilization on each.
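
As a sanity check that goes beyond eyeballing top: on 2.2.8+ and
2.4 kernels the last field of /proc/<pid>/stat is the CPU a task
last ran on, so a quick hack like the sketch below can sample where
each benchmark process is landing. (Later kernels append more
fields after 'processor', so taking the last field is only valid
for the kernel versions discussed here; this is a rough
illustration, not part of the lat_ctx runs above.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char path[64], line[1024];
	FILE *f;
	int i;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		exit(1);
	}
	snprintf(path, sizeof(path), "/proc/%s/stat", argv[1]);

	/* sample the task's last-used CPU once a second for 10 seconds */
	for (i = 0; i < 10; i++) {
		char *last;

		f = fopen(path, "r");
		if (!f || !fgets(line, sizeof(line), f)) {
			perror(path);
			exit(1);
		}
		fclose(f);

		/* strip the newline, then take the last space-separated field */
		line[strcspn(line, "\n")] = '\0';
		last = strrchr(line, ' ');
		printf("pid %s last ran on CPU %s\n",
		       argv[1], last ? last + 1 : "?");
		sleep(1);
	}
	return 0;
}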

--
Mike Kravetz kravetz@us.ibm.com
IBM Peace, Love and Linux Technology Center
