Date: 15 Apr 1999
Subject: Re: CPU affinity
Rogier Wolff writes:
> Mathias Froehlich wrote:
> >
> > > On Wed, 14 Apr 1999, Rik van Riel wrote:
> > >
> > > > I think it won't affect the code path or the scheduling
> > > > behaviour very much, except for the fact that it'll cause
> > > > processes to stick to their CPU much better.
> >
> > > 'much better'? Do you have any testcase that shows that we do not
> > > stick to CPUs well enough? I actually have recorded traces of full
> > > kernel builds and the like, and the number of 'bad' CPU switches
> > > was zero. I'm getting slightly tired of claims like the above;
> > > this isn't a Microsoft trade show, after all ...
> >
> > Yes, I have a testcase. I usually use xosview to visualize the CPU
> > usage on my dual PII system. When I start a pure numbercrunching
> > job, I can see the numbercruncher get scheduled out by every screen
> > update of xosview. The kernel then sees an idle CPU and schedules
> > the numbercruncher onto the other CPU. This looks like a __perfect__
> > "numbercrunching ping-pong".
>
> Suppose: We have a numbercrunching application running mostly on CPU1.
> Cache-1 is "hot", cache-2 is "cold". CPU1 is currently executing some
> triviality, CPU2 is idle. And suppose we have 1M cache on the CPUs.
>
> So suppose we schedule the number crunching application onto CPU2.
>
> DIMMs are 64 bits wide, right? And they run at something like 6-1-1-1
> timing? At a 100 MHz bus that's 9 cycles for a four-beat burst, so we
> get 32 bytes into the cache in 100 ns (rounding a bit). This is a
> 100 ns stall, and whether the application "heats up the cache" all at
> once after starting, or whether it happens over a (much) longer
> period of time doesn't matter: in the end it will have waited
>
> 1 MB cache size / 32 bytes per cache-line fetch * 100 ns = 32768 * 100 ns = 3.3 ms
>
> for the cache to heat up.
>
> So, idling the other processor would be advantageous if the
> "triviality" takes less than about 3ms.
>
> To get better than just guessing, you would have to keep a cache of
> "average running time", keyed on, for example, the name of the process.
>
> So a shell, which just does:
>
> read (0, buf, 1024);            /* read a command from stdin (fd 0) */
> if (!fork ())
>         execlp (buf, buf, (char *) NULL);  /* child: run the command */
> else
>         wait (NULL);            /* parent: wait for the child to exit */
>
> will most likely have an average running time of less than 3 ms, so
> pushing the number cruncher over is not a good idea. But after, say,
> one more time slice (10 ms of running time), the odds are against that.
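
Rogier's 3.3 ms figure is easy to check with a few lines of C. The
constants below are his assumptions (1 MB cache, 32-byte lines, roughly
100 ns per line fetch), not measured values:

#include <stdio.h>

/* Back-of-the-envelope check of the 3.3 ms estimate above. */
int main (void)
{
	double cache_bytes = 1024.0 * 1024.0;	/* 1 MB of cache */
	double line_bytes = 32.0;		/* one cache-line fetch */
	double fetch_ns = 100.0;		/* ~6-1-1-1 burst, rounded up */

	double fetches = cache_bytes / line_bytes;	/* 32768 line fetches */
	double total_ms = fetches * fetch_ns / 1e6;	/* ns -> ms */

	printf ("cache warm-up costs about %.1f ms\n", total_ms);
	return 0;
}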
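
The "average running time" heuristic Rogier describes can also be
sketched. Everything here is hypothetical: the table, record_run and
worth_migrating are invented names for illustration, not kernel
interfaces. The idea is to keep a decaying average run time per command
name and migrate the cruncher only when the task occupying its CPU is
expected to outlast a cache refill:

#include <string.h>

#define CACHE_REFILL_NS	3300000ULL	/* the ~3.3 ms from above */
#define TABLE_SIZE	64

struct avg_entry {
	char comm[16];			/* process name, like task->comm */
	unsigned long long avg_ns;	/* weighted average running time */
};

static struct avg_entry table[TABLE_SIZE];

static struct avg_entry *lookup (const char *comm)
{
	unsigned int h = 0;

	while (*comm)
		h = h * 31 + (unsigned char) *comm++;
	return &table[h % TABLE_SIZE];	/* colliding names share a slot */
}

/* Call when a task sleeps: fold the burst it just ran into its average. */
void record_run (const char *comm, unsigned long long ran_ns)
{
	struct avg_entry *e = lookup (comm);

	if (strncmp (e->comm, comm, sizeof (e->comm)) != 0) {
		strncpy (e->comm, comm, sizeof (e->comm) - 1);
		e->avg_ns = ran_ns;	/* first sighting of this name */
	} else {
		e->avg_ns = (3 * e->avg_ns + ran_ns) / 4;  /* decaying average */
	}
}

/* Migrate the cruncher only if the task occupying its CPU is expected
 * to run longer than a full cache refill. */
int worth_migrating (const char *occupant)
{
	return lookup (occupant)->avg_ns > CACHE_REFILL_NS;
}

Keying on the process name, as Rogier suggests, means all instances of
a short-lived command like a shell share one estimate.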

It looks to me as if the underlying problem is that xosview steals the
CPU that the number cruncher is running on, instead of using the other
CPU (which I presume is idle). That shouldn't happen. If it does, it's
a bug.

Regards,

Richard....
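
Richard's point, as a simplified sketch: a wakeup-placement policy that
prefers an idle CPU and only falls back to preempting the waking task's
previous CPU when nothing is idle. pick_cpu_for_wakeup and cpu_is_idle
are invented for illustration; this is not the actual 2.2 scheduler code:

#define NCPUS 2

int cpu_is_idle[NCPUS];		/* nonzero while that CPU runs the idle task */

/* Pick a CPU for a task that just woke up (e.g. xosview). */
int pick_cpu_for_wakeup (int last_cpu)
{
	int cpu;

	/* Best case: the cache-warm CPU the task last ran on is free. */
	if (cpu_is_idle[last_cpu])
		return last_cpu;

	/* Otherwise take any idle CPU, so nobody gets preempted. */
	for (cpu = 0; cpu < NCPUS; cpu++)
		if (cpu_is_idle[cpu])
			return cpu;

	/* No CPU is idle: go back to the old CPU and let priorities
	 * decide whether preempting is worth it. */
	return last_cpu;
}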

