Subject: Re: On optimising the scheduler for large run queues

On Sat, 29 Jan 2000, Jamie Lokier wrote:
> I will summarise.
>
> Heavyweight kernel developers (no that doesn't mean >40 years old :-)
> believe that, for all well designed real applications, scheduler
> overhead is dominated by cache overhead. Therefore optimising cache
> overhead takes priority.
>
> I agree. Though I personally do not have figures to support it, I have
> the impression that others do.

Your impression is right; I agree 100%.

> Adding code to the scheduler is not worthwhile unless there is a
> demonstrable benefit in a real application.
>
> It is difficult to demonstrate. Everyone is designing their
> applications around the assumption that scheduler overhead is
> significant and should be avoided. Some even use user-space scheduling.

As my doubts (the question marks) have shown, I can agree here.

> Multi-threaded I/O can be reduced to select() I/O.
>

Or better, reduce it to thread pooling + a smaller select() set,
or better still, to thread pooling + overlapped I/O (start ...).
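
Just to make sure we mean the same thing by the select() side of it, here
is a rough, untested sketch of a single-threaded select() loop
(handle_client() is a made-up echo handler, nobody's real code):

#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* Made-up per-fd handler: echo whatever arrives; < 0 means "drop this fd". */
static int handle_client(int fd)
{
	char buf[4096];
	ssize_t n = read(fd, buf, sizeof(buf));

	if (n <= 0)
		return -1;
	write(fd, buf, n);
	return 0;
}

/* One thread, one select(): the process sleeps in select() instead of
 * parking one blocked (and later runnable) task per connection. */
void serve(int listen_fd)
{
	fd_set rfds, active;
	int fd, maxfd = listen_fd;

	FD_ZERO(&active);
	FD_SET(listen_fd, &active);

	for (;;) {
		rfds = active;
		if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
			continue;	/* e.g. EINTR */
		for (fd = 0; fd <= maxfd; fd++) {
			if (!FD_ISSET(fd, &rfds))
				continue;
			if (fd == listen_fd) {
				int cfd = accept(listen_fd, NULL, NULL);

				if (cfd >= 0) {
					FD_SET(cfd, &active);
					if (cfd > maxfd)
						maxfd = cfd;
				}
			} else if (handle_client(fd) < 0) {
				close(fd);
				FD_CLR(fd, &active);
			}
		}
	}
}

The thread-pool variant just hands the ready descriptors to a few worker
threads instead of handling them inline, so each thread keeps a small
select set.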

>
> Real applications that are fully optimised for performance do not have
> large run queues.
>
> Apparently this is true.

Agreed. But apps with long run queues do exist. We can say that this is
due to bad design, and I agree even there, but we must be prepared to
fail in those cases.
We can say, "OK, these apps are so rare that we can avoid bloating our
kernel with code that helps bad design.", and I can agree, but we must
keep in mind that such apps are typical in big corporations, which
typically have XXXX K lines of code written that way.
Now, I'm a guy who likes perfect code even from the spacing and
indentation point of view, and I like to go up and down through the code
I maintain to cut, clean up and redesign it. But have you ever tried
telling the businessmen we all have above us, "We need XX man-months of
work to 'redesign' our code"?
This could be the reaction:

@#@#!|@]@*#~``?#-

And I think we all like it when we read on Slashdot:

"Corporate XXXX has adopted Linux in its XXXX installations !"


> Therefore optimising the scheduler for large run queues, at a cost for
> small run queues (in maintenance, footprint and overhead) is
> counterproductive for the most critical cases.
>
> Here is a logical reasoning error.[3] By the kernel heavyweights.

Agreed. We just have to be aware that we could be missing something.

> The overhead of a scheduler changes is *only* relevant to high switching
> rate applications. And that doesn't include purely select() based
> single-threaded monolithic servers.

Agree.

> No-one has shown any real, well designed, well tuned, short run queue
> applications that have a high switching rate!
>
> They certainly haven't used such examples in their arguments! They've
> used other examples. That's why it's a reasoning error: Those examples
> aren't relevant!

I've seen examples of such systems, which reach more than 30 processes
in the run queue for several minutes a day (ftp servers + apache + cgi).
And not all system admins on earth are on linux-kernel.
I can agree to tell these guys "Redesign your network", but while I say
it I hope they don't answer "I'll redesign (read: change) my system.".
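
(For anyone who wants to watch this on his own boxes: the fourth field of
/proc/loadavg is "runnable/total" tasks, so an untested little sketch like
the one below is enough to sample the run queue length; run it in a loop
over a day and you see the kind of peaks I'm talking about.)

#include <stdio.h>

/* Print the instantaneous number of runnable tasks, taken from the
 * fourth ("running/total") field of /proc/loadavg. */
int main(void)
{
	double l1, l5, l15;
	int running, total;
	FILE *f = fopen("/proc/loadavg", "r");

	if (f == NULL)
		return 1;
	if (fscanf(f, "%lf %lf %lf %d/%d",
		   &l1, &l5, &l15, &running, &total) == 5)
		printf("runnable: %d of %d tasks\n", running, total);
	fclose(f);
	return 0;
}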

> Now, I expect there are examples of real, well designed, well tuned
> applications that switch very often.
>
> Until someone demonstrates that *those* applications have small run
> queues, and only those, then we have to consider the large run queue
> patches seriously. Remember the other applications, including
> single-threaded, SIGIO-optimised, mmapping hyper-tuned servers, are
> unaffected by scheduler changes.

Agree.
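
(Since Jamie mentions "SIGIO-optimised" servers above: just to make that
concrete, here is an untested sketch of arming a descriptor for SIGIO
instead of blocking a thread per connection. O_ASYNC and F_SETOWN are the
standard fcntl() knobs for this on Linux; the handler is deliberately
dumb.)

#define _GNU_SOURCE	/* make O_ASYNC / F_SETOWN visible on glibc */
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t io_ready;

/* A real server would work out *which* fd is ready (or poll its small
 * fd set); this only notes that something happened. */
static void sigio_handler(int sig)
{
	(void)sig;
	io_ready = 1;
}

/* Ask the kernel to send us SIGIO when fd becomes ready, instead of
 * having a thread block (and later reschedule) per descriptor. */
static int arm_sigio(int fd)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = sigio_handler;
	sigemptyset(&sa.sa_mask);
	if (sigaction(SIGIO, &sa, NULL) < 0)
		return -1;
	if (fcntl(fd, F_SETOWN, getpid()) < 0)
		return -1;
	return fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC | O_NONBLOCK);
}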

> This is because loaded hyper-tuned servers don't schedule at all.
> And under partial load, the schedule to and from idle isn't important.
> It is absorbed.
>
> All the above means that the argument over whether to handle large run
> queues properly has not been properly settled. The answers have not
> answered the questions.

Final agreement.

Cheers,
Davide.

--
All this stuff is IMVHO




-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
