Subject: Re: Context switch times
From: Eric W. Biederman
Date: 05 Oct 2001 09:15:37 -0600
Linus Torvalds <torvalds@transmeta.com> writes:
> On Thu, 4 Oct 2001, Mike Kravetz wrote:
> >
> > On Thu, Oct 04, 2001 at 10:42:37PM +0000, Linus Torvalds wrote:
> > > Could we try to hit just two? Probably, but it doesn't really matter,
> > > though: to make the lmbench scheduler benchmark go at full speed, you
> > > want to limit it to _one_ CPU, which is not sensible in real-life
> > > situations.
> >
> > Can you clarify? I agree that tuning the system for the best LMbench
> > performance is not a good thing to do! However, in general on an
> > 8 CPU system with only 2 'active' tasks I would think limiting the
> > tasks to 2 CPUs would be desirable for cache effects.
>
> Yes, limiting to 2 CPU's probably gets better cache behaviour, and it
> might be worth looking into why it doesn't. The CPU affinity _should_
> prioritize it down to two, but I haven't thought through your theory about
> IPI latency.
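[Editor's note: for concreteness, this is roughly what "limiting the tasks to 2 CPUs" looks like from user space today. It is a minimal sketch using the sched_setaffinity(2) interface, which was added to Linux after this thread was written; the choice of CPUs 0 and 1 is arbitrary.]

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	cpu_set_t mask;

	/* Restrict the calling process to CPUs 0 and 1 only. */
	CPU_ZERO(&mask);
	CPU_SET(0, &mask);
	CPU_SET(1, &mask);

	/* pid 0 means "the calling process". */
	if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
		perror("sched_setaffinity");
		exit(EXIT_FAILURE);
	}

	/* ... run the CPU-bound work here ... */
	return 0;
}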
I don't know what causes it, but I have seen this excessive CPU switching in the wild. In particular, on a dual-processor machine I ran two CPU-intensive jobs plus a handful of daemons, and the CPU-intensive jobs would switch CPUs every couple of seconds.
I was investigating it because a customer running on an Athlon was getting a superlinear speedup. With one process the job would take 8 minutes; with two processes, one would take 8 minutes and the other would take 6 minutes. Very strange.
Except at their very beginning, these processes did no I/O; they were pure CPU hogs until they spat out their results. Very puzzling. I can't see why we would ever want to take the cache-miss penalty of switching CPUs in a case like this.
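[Editor's note: a hedged user-space sketch of how one might count such migrations for a pure CPU hog. It assumes sched_getcpu(), a glibc helper that also postdates this thread; on an otherwise idle SMP box a CPU hog would ideally report zero migrations.]

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	int last = -1, migrations = 0;
	unsigned long i;

	/* Spin, recording every time the scheduler moves us
	 * to a different CPU. */
	for (i = 0; i < 100000000UL; i++) {
		int cpu = sched_getcpu();
		if (cpu != last) {
			if (last != -1)
				migrations++;
			last = cpu;
		}
	}
	printf("migrations: %d\n", migrations);
	return 0;
}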
Eric