Subject: Re: Interesting scheduling times - NOT

Larry McVoy writes:
> : As I've already said, you're probably not seeing the variance because
> : you don't run with RT priority.
>
> Been there, tried it, the results have very low variance:
>
> RT: 5.42 (5.52 5.47 5.43 5.42 5.42 5.42 5.42 5.41 5.40 5.40 5.39)
> !RT: 4.65 (4.86 4.85 4.84 4.83 4.66 4.65 4.61 4.55 4.55 4.54 4.54)
>
> With 10 background processes:
>
> RT: 11.04 (11.13 11.11 11.11 11.07 11.07 11.04 11.04 11.00 11.00 10.98 10.98)
> !RT: 6.76 (6.99 6.80 6.79 6.79 6.76 6.76 6.75 6.51 6.49 6.48 6.47)

Interesting.

> : I'm left with variance (up to 50%) in the long run queue case. I can
> : sometimes see this variance even with SCHED_OTHER. So there is still
> : some other effect going on. Again, I don't see a variance this large
> : with your test, so again there is something that my test is sensitive
> : to.
> : Using pipes and token passing doesn't change the variance, BTW.
>
> It does if you design the benchmark right. Look, you keep setting
> yourself up to take a fall. You may be right about everything else,
> but you're just dead wrong about the variance. There is no reason
> for it to be there. It's not just my benchmark that doesn't see it,
> every other variation of this benchmark is stable in the small
> process case (where they are basically simulating or calling
> sched_yield()).

There is a *reason* the variance is there. The question is what is
causing it. Glib replies like "your test is broken" don't answer the
question. There is something different about my test that leaves it
vulnerable/sensitive to some effect or set of effects, whereas yours
is not. It may indeed be because my code is doing something silly, or
leaving itself open to some undesirable effect (one that is not
interesting in the context of the measurement). On the other hand, as
you've said before, it may be that I've stumbled across something.

All I am saying is "here's some numbers I've gotten, here's how I've
gotten them and I'm getting these strange variances. Here's some
possible causes, I've done this and this to reduce the variances but
still have some".

> : Instead of abusing me and my test, why not look at the code and try to
> : figure out *why* we are getting different numbers? That's how science
> : is done. If your results are different from someone else, you go and
> : figure out why, not abuse them and their work.
> : I'm looking at your code to see why it yields less variance. When the
> : answer is found, I'm sure it will be useful.
> : If I knew *why* my test is more sensitive, the problem would be
> : solved.
>
> Richard, you seem to have a bit of a complex here. I'm not
> ``abusing'' you, I'm simply holding you up to the normal standards
> of engineering. You seem to want to be a contributing member of the
> Linux kernel hacker crowd. That's cool, I have no problem with
> that. I do have a problem when you show up and tell me that
> important parts of the system should be redesigned based on what I
> know to be a flawed analysis. That leads to crappy, needlessly
> bloated systems and I consider it part of my contribution to prevent
> that wherever possible.

Maybe I have a "complex" because your first (private) message to me
started off with "What the fuck are you talking about?". And you've
been pretty aggressive about the whole thing and have used emotive
words. Who has the complex?

BTW: the two suggestions I've made (reordering the task structure and
having a special RT run queue) add zero and very little code bloat,
respectively.

Also, consider a shared RT/timesharing system where some
dumb/malicious user launches a fork bomb or something similar. A
separate run queue would provide increased protection for critical
processes. I don't want to see 100 running processes kill my RT
latency.
The SCHED_FIFO and SCHED_OTHER scheduling policies already set RT
processes apart. They get special priority and protection from normal
processes. A separate run queue furthers that protection.
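
To show how little code the second suggestion needs, here is a rough
user-space sketch of the idea (this is not a kernel patch; the struct
and function names are invented for illustration): RT tasks go on
their own short list, so picking one never involves scanning the
timesharing tasks.

#include <stdio.h>

struct task {
    int rt_priority;          /* > 0 for SCHED_FIFO/SCHED_RR, 0 for SCHED_OTHER */
    int goodness;             /* simplified timesharing weight */
    struct task *next;
};

static struct task *rt_queue;      /* short: only runnable RT tasks */
static struct task *other_queue;   /* possibly long: everything else */

static void enqueue(struct task *t)
{
    struct task **q = t->rt_priority ? &rt_queue : &other_queue;

    t->next = *q;
    *q = t;
}

/* Pick the next task to run: the best RT task wins outright, and only
   when no RT task is runnable do we walk the timesharing list.  The RT
   selection cost stays constant no matter how long other_queue gets. */
static struct task *pick_next(void)
{
    struct task *t, *best = NULL;

    for (t = rt_queue; t != NULL; t = t->next)
        if (best == NULL || t->rt_priority > best->rt_priority)
            best = t;
    if (best != NULL)
        return best;
    for (t = other_queue; t != NULL; t = t->next)
        if (best == NULL || t->goodness > best->goodness)
            best = t;
    return best;
}

int main(void)
{
    static struct task bg[100];    /* 100 runnable timesharing tasks */
    struct task rt = { 50, 0, NULL };
    int i;

    for (i = 0; i < 100; i++) {
        bg[i].goodness = i;
        enqueue(&bg[i]);
    }
    enqueue(&rt);
    /* The RT task is found without scanning the 100 background tasks */
    printf("picked task with RT priority %d\n", pick_next()->rt_priority);
    return 0;
}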

Even if the variances in my measurement turn out to be due to a broken
test or sensitivity to some "uninteresting" effect, there is no
denying that an increased run queue length slows down context switch
times. With my test I get a cost of 0.2 us per process and with yours
I get 0.19 us (PPro 180).
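
For reference, the shape of this kind of measurement can be written
down in a few lines. This is not my actual test (the loop count, the
nice level and the USE_RT switch are arbitrary); it is just a sketch
of timing sched_yield() with some low-priority spinners padding the
run queue.

#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

#define LOOPS  100000
#define MAX_BG 64

int main(int argc, char **argv)
{
    int nbg = (argc > 1) ? atoi(argv[1]) : 0;   /* extra runnable processes */
    pid_t bg[MAX_BG];
    struct timeval start, stop;
    double us;
    int i;

    if (nbg > MAX_BG)
        nbg = MAX_BG;

    /* Pad the run queue with low-priority spinners */
    for (i = 0; i < nbg; i++) {
        bg[i] = fork();
        if (bg[i] < 0) {
            perror("fork");
            return 1;
        }
        if (bg[i] == 0) {
            nice(19);
            for (;;)
                ;
        }
    }

    /* Optionally time the loop at RT priority */
    if (getenv("USE_RT")) {
        struct sched_param sp;

        sp.sched_priority = 1;
        if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0)
            perror("sched_setscheduler");
    }

    gettimeofday(&start, NULL);
    for (i = 0; i < LOOPS; i++)
        sched_yield();          /* each call scans the run queue */
    gettimeofday(&stop, NULL);

    us = (stop.tv_sec - start.tv_sec) * 1e6 +
         (stop.tv_usec - start.tv_usec);
    printf("%.2f us per yield with %d extra runnable processes\n",
           us / LOOPS, nbg);

    for (i = 0; i < nbg; i++)
        kill(bg[i], SIGKILL);
    return 0;
}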

A more realistic target machine for our RT applications is a Pentium
100. lat_ctx gives me 17.54 us with 0 extra processes, and 33.74 us
with 10 extra (low priority) processes. That's a cost of 1.6 us for
every extra process on the run queue. With my tests I got 0.91 us.
I hope you believe the lat_ctx results. So 10 extra processes doubles
the context switch time, and hence the RT wakeup latency.
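
(Spelling out the arithmetic: (33.74 - 17.54) us / 10 = 1.62 us per
extra process, and 33.74 / 17.54 is about 1.9, i.e. close to a
doubling.)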

Now, I've seen our online reduction and visualisation software clock
up to 10 processes on the run queue. I didn't need to cook that
result: I just watched what was happening without telling anyone.
So, at least for us, 10 extra processes is not unrealistic.

> And, I'm sorry, but it isn't my job to figure out where you went
> wrong with your benchmark. The reason that it is your job, not my
> job, is that you are proposing that the system get changed. As
> such, it is up to you to justify the change. That's just basic
> engineering. You seem to want to be part of the kernel engineering
> crowd; this isn't how you get there.

If you're not interested in being constructive (i.e. "I think your
variance may be due to XYZ effect"), then why not get off my back and
let me get on with tracking down the problem?
If I propose a mechanism which I think may explain the variance, and
you don't agree, all you need to do is say so and why. Repeating the
mantra "your test is broken" is pointless.

Regards,

Richard....

