Date:    22 Sep 1998
From:    Richard Gooch <rgooch@atnf.csiro.au>
Subject: Re: Interesting scheduling times - NOT

Larry McVoy writes:
> Richard Gooch <rgooch@atnf.csiro.au>:
> : > There is no possible way that you would get 100% variance due to cache
> : > misses for this sort of test. (a) you are just calling sched_yield() -
> : > there is virtually nothing in the cache footprint - where's the source of
> : > the cache conflicts? (b) I'm sitting here running a 16 process context
> : > switch test over and over and I'm seeing about a 4-6% variance from run
> : > to run. How come I'm not seeing your variance?
> :
> : Cache line aliasing.
>
> Huh? I said that the cache isn't going to cause this sort of problem and
> your answer is "cache line aliasing"? If that's your claim, prove it.
> Work through the math and show us all how "cache line aliasing" can cause
> 100% variance in your benchmark but will cause 5% variance in lmbench.

No, your claim is that my test code is flawed. I have used both pipe
and yielding techniques and I get similar variances. You claim that
because you don't see the variances and I do, my test code must be
flawed. It doesn't work that way: just because you don't measure an
effect and I do doesn't mean my test is flawed. Your testing
environment may be different from mine, and we may be measuring subtly
different things, but it does not follow that my test is flawed.
Instead of abusing me and telling me that my test is flawed, look at
what may be causing the variances and try to understand why we are
getting different results.

Now, here's how my benchmark can yield variance: cache effects result
in most runs having largish switch times. However, in a small
proportion of runs the cache pollution/aliasing may not happen (perhaps
some interrupt comes along and happens to pull in the cache lines I'm
about to use). In those cases the minimum will be less than the median.
Since I do 200,000 switches (which takes several seconds), I have a
reasonable chance of hitting that happy confluence of events where the
normal cache pollution/aliasing doesn't bite me.
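
For concreteness, here's a minimal sketch of the kind of yield-based
loop I mean. It is not my actual test code: the block sizes, timing
granularity and reporting are only illustrative (BLOCKS * YIELDS gives
the 200,000 switches), and it assumes both processes share the one CPU.

/*
 * A minimal sketch, NOT the real test harness: two processes yield to
 * each other, the parent times fixed-size blocks of sched_yield()
 * calls and keeps the cheapest block, on the theory that the minimum
 * is the run least disturbed by cache pollution and interrupts.
 */
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

#define BLOCKS 200              /* timed blocks                  */
#define YIELDS 1000             /* sched_yield() calls per block */

static double now_us(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1e6 + tv.tv_usec;
}

int main(void)
{
    double min_us = 1e30;
    pid_t child;
    int i, j;

    child = fork();
    if (child < 0)
        return 1;
    if (child == 0) {           /* partner process: just keep yielding */
        for (;;)
            sched_yield();
    }

    for (i = 0; i < BLOCKS; i++) {
        double t0, dt;

        t0 = now_us();
        for (j = 0; j < YIELDS; j++)
            sched_yield();      /* each yield: switch away and back */
        dt = (now_us() - t0) / YIELDS;
        if (dt < min_us)
            min_us = dt;
    }

    kill(child, SIGTERM);
    printf("min per-yield time: %.2f us\n", min_us);
    return 0;
}

Timing in blocks and keeping the cheapest one is what makes the result
sensitive to the occasional lucky stretch where the cache isn't
polluted.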

It only takes one run to lower the minimum. In your test, taking the
median makes you insensitive to the effect I described. Now, unlike
you, I'm not claiming your test is flawed. What you are measuring is
subtly different. My test is more sensitive to cache/memory effects.

Unfortunately, I don't read x86 assembler, so I'm not sure how much
work is done by __switch_to() and how much state it saves/restores. It
seems to me that is the prime place where cache effects will take
their toll.

> : No, you're misrepresenting what I said. I take the minimum not because
> : it looks better, but because it the correct thing to do. The average
> : and maximum times can be polluted with other effects (interrupt
> : processing). The minimum time is more robust against these effects.
>
> That's funny: the lmbench BENCH() macro does 10 runs and takes the
> median. Even when I add about 1K interrupts/second, it still varies
> less than 10%. Why does your benchmark vary so much by comparison?
> Why doesn't lmbench vary the same amount?

See above. The minimum time has a better chance of avoiding cache
pollution/aliasing effects.
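
As a toy illustration of the difference between the two statistics
(made-up numbers, not measurements):

/*
 * Nine runs hit the usual cache aliasing, one run gets lucky.  An
 * lmbench-style median reports the typical run; taking the minimum
 * reports the lucky one.
 */
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;

    return (d > 0) - (d < 0);
}

int main(void)
{
    /* hypothetical per-run switch times, in microseconds */
    double run[10] = { 8.5, 8.4, 8.6, 8.5, 4.8, 8.5, 8.7, 8.4, 8.5, 8.6 };

    qsort(run, 10, sizeof(run[0]), cmp);
    printf("min    = %.1f us\n", run[0]);                  /* 4.8 */
    printf("median = %.1f us\n", (run[4] + run[5]) / 2);   /* 8.5 */
    return 0;
}

Neither number is wrong; they answer different questions.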

Note: in my tests, I see substantial variance mainly with the process
switching test, not the thread switching test. This is particularly
the case now that Linus posted the FPU saving fix.
On a PPro 180 I'm seeing minimum process switch times of 4.8 us to
8.5 us. That's a 77% increase. I think that variance is real, and not
an artefact of my test code.

> : Note that the true context switch time cannot be greater than the
> : minimum time measured.
>
> That's absolutely not true. Consider a bell shaped distribution of
> results. You are reporting the minimum. If it were true that the
> results consistently followed a bell distribution, your minimum
> would be extremely misrepresentative. That's why lmbench takes the
> median, it gets the result that is squarely in the bell; if it turns
> out that normal case is near the min, that's fine, that's what the
> number will be. Carl and I did an extensive amount of statistical
> analysis to make sure that the numbers would stand up.

Perhaps I should have said "the context switch time when there are no
unfavourable cache effects cannot be greater than the minimum time
measured".

> Taking the minimum is not "the correct thing to do", it just happens
> to show more realistic numbers for your flawed benchmark.

No, again, my benchmark is not flawed. Look, you are trying to do
something different with your benchmark: your focus is to compare
different OSes and to see what the "normal" context switch time is.
But that isn't my focus. I want to see where I can shave cycles. In
that case seeing cache-induced variance is good, because it can expose
problems or suggest improvements (like reordering the task structure).

> : Note that with the fix from Linus, I'm now seeing 0.2 us overhead for
> : every extra process on the run queue for a PPro 180, and 0.91 us for a
> : Pentium 100. This sounds much closer to what you're measuring.
> :
> : Again, since the fix means that FPU state isn't saved (my tests don't
> : frob the FPU), and hence less cache pollution, I think this indicates
> : my tests are valid.
> :
> : > Given that I've not seen a production workload generate a run queue depth
> : > of 24 in the last 10 years, I'm just not convinced that this is a problem.
> :
> : I'm looking at potential depths of 10. Even there, the effect is still
> : noticeable.
>
> So let's look at that. On your Pentium 100, I'll bet your context
> switch is something like 8 or 9 usecs. So let's say you have 10

I'm getting 10.1 us for a process switch and 5.7 us for a thread
switch.

> background processes and they jack up your context switch time to 18
> usecs instead of 9. In order for you to have a run queue depth of 10,
> you are going to need to be pretty heavily CPU bound - if you aren't,
> then you are blocking on I/O and your run queue isn't as long as you
> are suggesting. So each process is, on average, using its whole time
> slice of 10,000 usecs. The net effect of the run queue depth is that you
> lose 9 usecs of CPU time every 10,000 usecs. That's a slowdown of .1%
> - you're telling us that 1/10th of a percent is noticeable?
>
> And before you start arguing that your processes switch more often than
> that, let's say they only run for 1/10th of their time slice - then it
> is 1%. And if they aren't running for their whole time slice then they
> are getting taken off the run queue and there goes your argument that the
> run queue is deep. Remember, the claim all along has been that a long
> run queue will cause a problem. If you don't have that long run queue,
> we all agree there is no problem. So the issue here is that if you have
> the long run queue, the effects are less than 1/10th of a percent.

I've explained this in another recent message. The problem is how long
it takes to wake up and switch in an RT process. The worst-case time
will increase if you have a large run queue.
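
To put rough numbers on it (a back-of-the-envelope using the figures I
quoted above, not a new measurement): with 10 extra processes on the
run queue, that is about 10 x 0.2 us = 2 us of added latency on the
PPro 180 and 10 x 0.91 us = 9.1 us on the Pentium 100, on top of the
base switch time. That is negligible as a throughput loss, but it adds
directly to the worst-case time to get a newly woken RT process onto
the CPU.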

BTW: the system I mentioned earlier has just crept up to 10 processes
on the run queue (again: no RT processes on the system).

Regards,

Richard....

