From: (Larry McVoy)
Subject: Re: Interesting scheduling times - NOT
Date: Fri, 18 Sep 1998 10:51:16 -0600
: To answer your question, I'll give you another question: just what do
: you think is the breakdown of costs in scheduling?
:
: Another way to answer this is to point out that basic scheduling costs
: are of the order of microseconds.  Is it then unreasonable to have
: another few microseconds if the run queue has a handful more entries?
We aren't talking about a few microseconds; a few microseconds would be
fine.  We're talking about 12 vs 40, according to you.  That's 28 usecs.

OK, let's look at that.  You have 12 extra processes in the equation, so
the extra work - worst case - is a linked list walk of 12 links.  Let's
say those links aren't in any of the caches.  I'll bet your machine has
memory latencies of < 200ns, so the extra link walks to find the right
process cost 2.4 usecs.  Suppose that there is another cache miss because
you look at some data structure as part of the walk - that's another 2.4.

To get to your 28 usecs, there would have to be 11 cache misses (in both
L1 and L2) per link walked.  If that's the case, my apologies and I'll go
yell at Linus.  But I don't think that's the case.  I just walked the code
to make sure and it looks like 6-7 cache misses to me; the fields
referenced are (in order of reference):
    has_cpu     // referenced in can_schedule()
    policy      // referenced in goodness()
    counter     // referenced in goodness()
    processor   // referenced in goodness()
    mm          // referenced in goodness()
    priority    // referenced in goodness()
    next_run    // referenced in schedule()
processor is right next to has_cpu, so if they don't cross a cache line
boundary, that's one cache line.
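To make that arithmetic concrete, here's a back-of-the-envelope sketch
using only the numbers above (12 extra runnable processes, the assumed
~200ns memory latency, and the 28 usec gap); the latency figure is the
assumption I stated, not a measurement:

    /* runq_cost.c: back-of-the-envelope estimate of the run queue walk.
     * All inputs are the figures quoted in this message, not measurements. */
    #include <stdio.h>

    int main(void)
    {
        double miss_ns = 200.0;   /* assumed memory latency per cache miss */
        int    links   = 12;      /* extra background processes on the run queue */
        double gap_us  = 28.0;    /* observed: 40 usecs - 12 usecs */

        /* one miss per link walked (the task struct itself) */
        printf("1 miss/link:   %.1f usecs\n", links * miss_ns / 1000.0);
        /* two misses per link (task struct plus one more structure) */
        printf("2 misses/link: %.1f usecs\n", links * 2 * miss_ns / 1000.0);
        /* misses per link needed to account for the whole gap */
        printf("misses/link to explain %.0f usecs: %.1f\n",
               gap_us, gap_us * 1000.0 / (links * miss_ns));
        return 0;
    }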
On top of that, consider what your test is doing: each process is calling
sched_yield() in a loop.  As far as I could tell, you aren't doing anything
that pollutes the cache.  Unless there is some bug on your system that is
causing the cache to get flushed on each context switch (which I can't see
as possible - then the 2 process numbers would be higher), most of this
state should be cached, at least in the L2 cache.

All my numbers here assume that none of the kernel scheduling data
structures are in either cache.  In other words: even if you were running
without a data cache, I fail to see how the run queue would add that much
overhead.  Given that you do have a data cache, it seems that much less
likely that the run queue is the problem.
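If you wanted to rule cache effects in or out, an easy contrast would be a
background job that deliberately dirties the cache instead of spinning in
getppid().  Something like this (my own hypothetical test, not part of
lmbench; the 1MB size is just a guess at something bigger than your L2):

    /* dirty.c: background load that walks a buffer larger than L2,
     * touching a new cache line each iteration, so it evicts the
     * scheduler's data instead of leaving the caches alone. */
    #define SZ (1024 * 1024)
    static volatile char buf[SZ];

    int main(void)
    {
        unsigned long i = 0;
        for (;;) {
            buf[i] += 1;        /* force a load and a store */
            i = (i + 32) % SZ;  /* step one 32-byte line at a time */
        }
        return 0;
    }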
An interesting test would be to go whack sched.h to put the listed fields
all right next to each other, starting on a cache line boundary, in the
order listed above.  It's 20 bytes total, that is just 2 cache lines
instead of 6-7.  If that made a big difference, you'd have a case to make
that the run queue hurts.  Even if it doesn't make a big difference, it
would be a good thing - cache misses are bad and they get nastier on SMP.
IRIX has been heavily whacked to put all the data you need and the lock on
the same cache line for all the data structures that are commonly used.
They didn't do all that work because they were bored.
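A minimal sketch of the kind of layout I mean (the field types and exact
sizes are from memory, not the real sched.h declarations):

    /* Group the fields schedule()/goodness() touch on every pass so one
     * run queue entry costs 1-2 cache lines instead of 6-7.  On a Pentium
     * a line is 32 bytes, so the whole group fits in 1-2 lines as long as
     * it starts on a line boundary. */
    struct task_struct {
        /* hot scheduling fields - keep together, cache line aligned */
        int                 has_cpu;    /* can_schedule() */
        unsigned long       policy;     /* goodness() */
        long                counter;    /* goodness() */
        int                 processor;  /* goodness() */
        struct mm_struct    *mm;        /* goodness() */
        long                priority;   /* goodness() */
        struct task_struct  *next_run;  /* schedule() */
        /* ... everything else follows ... */
    };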
: > I took the lmbench context switch test and ran it with and without
: > background jobs.  The background jobs were doing
: >
: > 	nice(10);
: > 	for (;;) getppid();
: >
: > With 8 processes context switching and 12 background jobs, the time
: > goes from 11 usecs to 13.  Much more sane.  This is on a 166Mhz pentium,
: > no MMX, Linux 2.0.33.
:
: Not having seen your code, I can't comment on what you're actually
: measuring.
Get lmbench from ftp.bitmover.com/lmbench, build it, and create the
background processes with:
    cat > busy.c
    main(){ for(;;) getppid(); }
    ^D
    cc busy.c
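The same loop spelled out as a complete file, with the nice(10) from the
quoted test added, would look something like:

    /* busy.c: background load -- drop priority, then spin on a cheap
     * system call forever. */
    #include <unistd.h>

    int main(void)
    {
        nice(10);           /* as in the quoted test above */
        for (;;)
            getppid();
        return 0;
    }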
run the test without any background noise:
../bin/i586-linux/lat_ctx 8 8 8 8 8 8 8 8 8 8
and then load up the system and try it with the background load:
    for i in 1 2 3 4 5 6 7 8 9 0 1 2
    do	a.out &
    done
../bin/i586-linux/lat_ctx 8 8 8 8 8 8 8 8 8 8