Date: 21 Sep 1998
From: Richard Gooch <rgooch@atnf.csiro.au>
Subject: Re: Interesting scheduling times - NOT
Larry McVoy writes:
> Richard Gooch <rgooch@atnf.csiro.au>:
> : OK, first let me point out that I'm getting wildly variable times.
> : On successive runs of the code (no other changes made, no-one else on
> : the system, no X running), I got 8.2 us and 17.5 us with no extra
> : processes.
>
> That's not a Linux problem, that's your benchmark design. The
> lmbench ctx switch test case varies about 5% when I'm wildly moving
> the mouse in X windows and running the benchmark in a while true;
> loop. You are getting more than a 100% variance in your benchmark.
> What possible valid conclusion could you draw from those results?

No, the benchmark design is correct. Look at it closely and show me
how it's flawed. I've checked the way the scheduler works and I'm
convinced I know just what I'm measuring.

Note that with 2.1.122 plus the fix Linus posted for FPU state saving,
the variance has decreased.
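For concreteness, here is a minimal sketch of the kind of sched_yield()
ping-pong I'm talking about (an illustration, not the actual benchmark
source): two processes, both always runnable, so each call to
sched_yield() should cost one trip through the scheduler.

  #include <stdio.h>
  #include <unistd.h>
  #include <sched.h>
  #include <signal.h>
  #include <sys/time.h>
  #include <sys/wait.h>

  #define ITERATIONS 100000

  static double now_us(void)
  {
      struct timeval tv;

      gettimeofday(&tv, NULL);
      return tv.tv_sec * 1e6 + tv.tv_usec;
  }

  int main(void)
  {
      double start, elapsed;
      pid_t child;
      int i;

      child = fork();
      if (child == 0) {
          /* Child: stay on the run queue and keep yielding until killed. */
          for (;;)
              sched_yield();
      }

      start = now_us();
      for (i = 0; i < ITERATIONS; i++)
          sched_yield();    /* with two runnable processes, each yield
                               should trigger one switch */
      elapsed = now_us() - start;

      printf("%.2f us per yield\n", elapsed / ITERATIONS);

      kill(child, SIGKILL);
      waitpid(child, NULL, 0);
      return 0;
  }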

> [ comments about the variance being due to cache conflicts deleted ]

Deleted, but relevant. The improvement from the FPU state saving
bugfix supports this: with the FPU state no longer being saved, there
is less memory traffic and hence less cache pollution.

> There is no possible way that you would get 100% variance due to cache
> misses for this sort of test. (a) you are just calling sched_yield() -
> there is virtually nothing in the cache footprint - where's the source of
> the cache conflicts? (b) I'm sitting here running a 16 process context
> switch test over and over and I'm seeing about a 4-6% variance from run
> to run. How come I'm not seeing your variance?

Cache line aliasing.
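To spell out what I mean (an illustrative sketch, not code from either
benchmark, and the 16 kB direct-mapped cache size is just an
assumption): two hot addresses whose difference is a multiple of the
cache size map to the same lines and keep evicting each other. Whether
the benchmark's hot data lands in that kind of relationship depends on
where things happen to be placed from run to run, and that shows up as
large run-to-run variance even when the code path is identical.

  #include <stdio.h>
  #include <stdlib.h>

  #define CACHE_SIZE (16 * 1024)    /* assumed direct-mapped cache size */

  int main(void)
  {
      char *mem = malloc(4 * CACHE_SIZE);
      volatile char *a = mem;                    /* hot buffer 1 */
      volatile char *b = mem + 2 * CACHE_SIZE;   /* maps to the same lines */
      long i, sum = 0;

      for (i = 0; i < 10000000; i++)
          sum += a[i & 63] + b[i & 63];   /* alternating accesses thrash
                                             the shared cache lines */

      printf("%ld\n", sum);
      free(mem);
      return 0;
  }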

> [ stuff about using the minimum deleted ]
>
> : OK, now let's look at the minimum latency when running with 12 extra
> : processes: I get 38.8 us, 32.9 us and 43.5 us and 38.9 us on successive
> : runs.
> : So it's fair to say that the time goes from 6.2 us (2 processes on the
> : run queue) to 32.9 us (14 processes on the run queue).
>
> No, it is absolutely not fair to say that. Pretend you are reviewing
> a paper and somebody submitted a paper that said "I'm not really sure
> what this is doing, my results varied too much, I'm not sure why, so I
> took just the mins because that didn't vary as much, and I think that
> looking at the mins means XYZ". What would your review say?

No, you're misrepresenting what I said. I take the minimum not because
it looks better, but because it is the correct thing to do. The average
and maximum times can be polluted with other effects (interrupt
processing). The minimum time is more robust against these effects.
Note that the true context switch time cannot be greater than the
minimum time measured.
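Put another way (an illustrative sketch; run_one_trial() is a
hypothetical hook standing in for one timed pass of the benchmark):
each trial measures the true switch cost plus whatever interrupts and
cache misses landed in that particular trial, and that noise only ever
adds time, so the minimum over many trials is the tightest bound you
can extract.

  #include <stdio.h>

  #define NUM_TRIALS 50

  extern double run_one_trial(void);  /* hypothetical: one timed benchmark
                                         pass, returns us per switch */

  int main(void)
  {
      double best = 1e9;
      int t;

      for (t = 0; t < NUM_TRIALS; t++) {
          double us = run_one_trial();
          if (us < best)     /* noise can only inflate a sample, so the */
              best = us;     /* smallest one is the best estimate */
      }
      printf("%.2f us (minimum of %d trials)\n", best, NUM_TRIALS);
      return 0;
  }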

> : I've looked at your code and it does similar things to mine (when I
> : use mine in -pipe mode). Your benchmark has the added overhead of the
> : pipe code. Using sched_yield() gets me closer to the true scheduling
> : overhead.
>
> You didn't look close enough - it carefully factors out everything except
> the context switch - and "everything" includes the pipe overhead.

OK, yeah, sorry, I see where you're doing that.
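For anyone following along, the subtraction being referred to is
roughly this (a sketch of the idea, not lmbench's actual source): time
the pipe traffic within a single process, where the scheduler is never
involved, and subtract that from the two-process ping-pong time.

  #include <stdio.h>
  #include <unistd.h>
  #include <sys/time.h>

  #define ITERS 100000

  static double pipe_only_us(void)
  {
      struct timeval t0, t1;
      int p[2];
      char c = 'x';
      int i;

      pipe(p);
      gettimeofday(&t0, NULL);
      for (i = 0; i < ITERS; i++) {
          write(p[1], &c, 1);   /* this process reads back its own byte: */
          read(p[0], &c, 1);    /* pure pipe overhead, no context switch */
      }
      gettimeofday(&t1, NULL);
      close(p[0]);
      close(p[1]);
      return ((t1.tv_sec - t0.tv_sec) * 1e6 +
              (t1.tv_usec - t0.tv_usec)) / ITERS;
  }

  int main(void)
  {
      /* The two-process version does the same ping-pong between a parent
       * and a forked child; the context switch estimate is that time
       * minus pipe_only_us(). */
      printf("pipe overhead: %.2f us per round trip\n", pipe_only_us());
      return 0;
  }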

> All that said, I'm not saying you haven't stumbled onto a problem.
> You may have and you may not have, we just can't tell from your
> test. I can say, however, that your claims of the runq being a
> problem are way overblown. My tests show a pretty smooth linear
> increase of 333ns per extra background process on a 166MHz Pentium.
> I think you claimed that just having 2 or 3 background processes
> would double the context switch time on a similar machine: that
> certainly isn't even close to true. I kept piling them on and
> finally doubled the two process case with 24 background processes
> (went from around 5 usecs to 11).

Note that with the fix from Linus, I'm now seeing 0.2 us overhead for
every extra process on the run queue for a PPro 180, and 0.91 us for a
Pentium 100. This sounds much closer to what you're measuring.

Again, since the fix means that FPU state isn't saved (my tests don't
frob the FPU), there is less cache pollution, which I think indicates
my tests are valid.
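To make the per-process overhead concrete, here is a rough sketch of
putting extra processes on the run queue (illustrative only, not the
actual test harness): fork processes that just spin so they stay
runnable, run the yield test again, and divide the increase in
per-switch time by the number of extra processes.

  #include <unistd.h>
  #include <signal.h>
  #include <sys/wait.h>

  #define NUM_EXTRA 12

  static void spawn_spinners(int n, pid_t *pids)
  {
      int i;

      for (i = 0; i < n; i++) {
          pids[i] = fork();
          if (pids[i] == 0)
              for (;;)
                  ;           /* always runnable, never blocks */
      }
  }

  static void kill_spinners(int n, pid_t *pids)
  {
      int i;

      for (i = 0; i < n; i++) {
          kill(pids[i], SIGKILL);
          waitpid(pids[i], NULL, 0);
      }
  }

  int main(void)
  {
      pid_t pids[NUM_EXTRA];

      spawn_spinners(NUM_EXTRA, pids);
      /* run the yield benchmark here and note the per-switch time */
      sleep(1);
      kill_spinners(NUM_EXTRA, pids);
      return 0;
  }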

> Given that I've not seen a production workload generate a run queue depth
> of 24 in the last 10 years, I'm just not convinced that this is a problem.

I'm looking at potential depths of 10. Even there, the effect is still
noticeable.

Regards,

Richard....

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
