Date:    18 Sep 1998
From:    Richard Gooch
Subject: Re: Interesting scheduling times - NOT
Larry McVoy writes:
> : To answer your question, I'll give you another question: just what do
> : you think is the breakdown of costs in scheduling?
> : Another way to answer this is to point out that basic scheduling costs
> : are of the order of microseconds. Is it then unreasonable to have
> : another few microseconds if the run queue has a handful more entries?
>
> We aren't talking about a few microseconds, a few microseconds would be
> fine. We're talking about 12 vs 40, according to you. That's 28 usecs.
> OK, let's look at that. You have 12 extra processes in the equation,
> so the extra work - worst case - is a linked list walk of 12 links.
> Let's say those links aren't in any of the caches. I'll bet your machine
> has memory latencies of < 200ns. So the extra link walks to find the
> right process is 2.4 usecs. Suppose that there is another cache miss
> because you look at some data structure as part of the walk - that's
> another 2.4. To get to your 28 usecs, there would have to be 11 cache
> misses (in both L1 and L2) per link walked. If that's the case, my
> apologies and I'll go yell at Linus. But I don't think that's the case.
> I just walked the code to make sure and it looks like 6-7 cache misses
> to me: the fields referenced are (in order of reference):

OK, first let me point out that I'm getting wildly variable times. In
my code I time how long it takes to call sched_yield() 10 times.
Because sched_yield() moves the process to the end of the run queue,
the other yielding process has to go through a corresponding 10
sched_yield() calls before the timing code finishes. That is a total
of 20 context switches. This is my basic measurement, which I hope
you'll agree is pretty simple. Note that I'm using the SCHED_FIFO
real-time class. I then repeat this measurement 10 000 times and
display the average scheduling time.
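In outline, the measurement is this (a minimal sketch rather than my
actual test code; it assumes a second copy of the program is already
running at the same SCHED_FIFO priority, and it also tracks the
minimum and maximum that I quote below):

#include <stdio.h>
#include <sched.h>
#include <sys/time.h>

#define YIELDS 10      /* yields per timed block: 20 context switches */
#define BLOCKS 10000   /* number of timed blocks                      */

static double now_us(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1e6 + tv.tv_usec;
}

int main(void)
{
    struct sched_param sp;
    double t0, per_switch, total = 0.0, min = 1e9, max = 0.0;
    int i, j;

    sp.sched_priority = 1;
    sched_setscheduler(0, SCHED_FIFO, &sp);   /* needs root */

    for (i = 0; i < BLOCKS; i++) {
        t0 = now_us();
        for (j = 0; j < YIELDS; j++)
            sched_yield();   /* partner yields back: 2 switches */
        per_switch = (now_us() - t0) / (2.0 * YIELDS);
        total += per_switch;
        if (per_switch < min) min = per_switch;
        if (per_switch > max) max = per_switch;
    }
    printf("min %.1f us  avg %.1f us  max %.1f us\n",
           min, total / BLOCKS, max);
    return 0;
}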

On successive runs of the code (no other changes made, no-one else on
the system, no X running), I got 8.2 us and 17.5 us with no extra
processes.
On another pair of successive runs, this time with 12 extra
low-priority processes, I got 35.2 us and 51.0 us. This is on a
Pentium 100 (an old Intel Neptune board).

I've now modified my test code to compute minimum and maximum
latencies (no extra processes):
Minimum scheduling latency: 6.2 us
Average scheduling latency: 15.0 us
Maximum scheduling latency: 40.5 us
and on the very next run:
Minimum scheduling latency: 6.2 us
Average scheduling latency: 8.6 us
Maximum scheduling latency: 38.0 us
and a few runs later:
Minimum scheduling latency: 7.7 us
Average scheduling latency: 11.8 us
Maximum scheduling latency: 145.9 us

Looking at the first pair of results, we can see that the minimum
scheduling time is relatively stable and the maximum isn't too bad
either. But the average does indeed change a lot. This must mean that
the distribution of scheduling times has changed from run to run, and
I suspect memory subsystem/cache interactions. Another effect could be
the servicing of interrupts, although 10 000 iterations of the main
loop should reduce the effect on the average value.
Nevertheless, it's better to look at the minimum scheduling time,
which is what I'll quote from now on.

OK, now let's look at the minimum latency when running with 12 extra
processes: I get 38.8 us, 32.9 us, 43.5 us and 38.9 us on successive
runs.
So it's fair to say that the time goes from 6.2 us (2 processes on the
run queue) to 32.9 us (14 processes on the run queue). The cost
appears to be 2.2 us per process on the run queue.
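(Taking the best loaded run, that's (32.9 - 6.2) us / 12 extra
processes = 2.2 us per extra process.)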

> has_cpu // referenced in can_schedule()

I've done my tests on UP kernels so you can ignore this one.

> policy // referenced in goodness()
> counter // referenced in goodness()
> processor // referenced in goodness()

Ditto for processor, I think.

> mm // referenced in goodness()
> priority // referenced in goodness()
> next_run // referenced in schedule()
>
> processor is right next to has_cpu so if they don't cross a boundary,
> that's one cache line.
>
> On top of that, consider what your test is doing: each process is
> calling sched_yield() in a loop. As far as I could tell, you aren't
> doing anything that pollutes the cache. Unless there is some bug on
> your system that is causing the cache to get flushed on each context
> switch (which I can't see as possible - then the 2 process numbers
> would be higher), then most of this state should be cached, at least
> in the L2 cache. All my numbers here assume that none of the kernel
> scheduling data structures are in either cache, in other words: even
> if you were running without a data cache, I fail to see how the run
> queue would add that much overhead. Given that you do have a data
> cache, it seems that much less likely that the run queue is the
> problem.

Another datapoint: I get 2.1 us on a Pentium/MMX 200 (SDRAM) with no
extra processes. I get 8.1 us (best case) with 12 extra processes. So
on this system we're down to 0.5 us per process.
On a PPro 180 (EDO) I go from 4.5 us to 9.8 us, a cost of 0.44 us per
process.
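(Same arithmetic as before: (8.1 - 2.1) / 12 = 0.50 us and
(9.8 - 4.5) / 12 = 0.44 us.)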

Now, I don't know for sure what the cause of the variability is, but
I suspect cache effects, and given that, the run queue costs don't
seem unreasonable.

> An interesting test would be to go whack sched.h to put the listed
> fields all right next to each other, starting on a cache line
> boundary, in the order listed above. It's 20 bytes total, that is
> just 2 cache lines instead of 6-7. If that made a big difference,
> you'd have a case to make that the run queue hurts. Even if it
> doesn't make a big difference, it would be a good thing - cache
> misses are bad and they get nastier on SMP. IRIX has been heavily
> whacked to put all the data you need and the lock on the same cache
> line for all the data structures that are commonly used. They
> didn't do all that work because they were bored.

Yes, this would be a good test. There are some dependencies on the
ordering, however, so one would need to be careful.
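For illustration, the reordering might look something like this
(purely a sketch: the types here are approximate, not the exact 2.1.x
declarations, the rest of the struct is omitted, and the aligned
attribute is GCC-specific):

struct task_struct {
    /* hot scheduler fields packed together, in order of
       reference, starting on a cache-line boundary */
    int                 has_cpu;    /* can_schedule() */
    unsigned long       policy;     /* goodness()     */
    long                counter;    /* goodness()     */
    int                 processor;  /* goodness()     */
    struct mm_struct   *mm;         /* goodness()     */
    long                priority;   /* goodness()     */
    struct task_struct *next_run;   /* schedule()     */
    /* ... everything else unchanged ... */
} __attribute__ ((aligned (32)));   /* one 32-byte Pentium line */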

> : > I took the lmbench context switch test and ran it with and without
> : > background jobs. The background jobs were doing
> : >
> : > nice(10);
> : > for (;;) getppid();
> : >
> : > With 8 processes context switching and 12 background jobs, the time
> : > goes from 11 usecs to 13. Much more sane. This is on a 166MHz Pentium,
> : > no MMX, Linux 2.0.33.
> :
> : Not having seen your code, I can't comment on what you're actually
> : measuring.

See: http://www.atnf.csiro.au/~rgooch/benchmarks.html

I've looked at your code and it does similar things to mine (when I
use mine in -pipe mode). Your benchmark has the added overhead of the
pipe code. Using sched_yield() gets me closer to the true scheduling
overhead.
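For comparison, a pipe-based ping-pong is essentially this (a sketch
in the spirit of the lmbench test, not its actual source), which
shows where the extra overhead comes from:

#include <unistd.h>

#define ROUNDS 10000

int main(void)
{
    int p1[2], p2[2], i;
    char token = 'x';

    pipe(p1);
    pipe(p2);
    if (fork() == 0) {
        for (;;) {                     /* child: echo the token */
            read(p1[0], &token, 1);
            write(p2[1], &token, 1);
        }
    }
    /* parent: each round trip is 2 context switches plus 2
       read()s and 2 write()s (timing code omitted) */
    for (i = 0; i < ROUNDS; i++) {
        write(p1[1], &token, 1);
        read(p2[0], &token, 1);
    }
    return 0;
}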

> Get lmbench from ftp.bitmover.com/lmbench, build it, create the background
> processes by
>
> cat > busy.c
> main(){ for(;;) getppid(); }
> ^D
> cc busy.c

I don't know why you suggest using getppid(): it means the background
processes will spend most of their time in the kernel and hence will
not be preemptible.
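A background load that stays in user space would be preemptible at
any clock tick; a trivial sketch of the alternative:

#include <unistd.h>

int main(void)
{
    volatile unsigned long n = 0;

    nice(10);
    for (;;)
        n++;   /* pure user-mode work, no system calls */
    return 0;
}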

Regards,

Richard....

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
