Subject: Re: kernel thread support - LWP's
: > I'll warn you up front that I've chewed over this topic at length with
: > people like Steve Kleiman, the architect of the Solaris threading
: > model and the guy that taught me much of what I know about operating
: > systems, and even he isn't convinced that the Solaris model is worth it.
: > If the clone() model had been around, I'm 90% sure he would have gone
: > with that.
:
: Do you have any inklings as to why they'd have gone this way?

Yes. It's a much nicer model. If you've worked in both, the Linux (aka
Plan 9) model is trivial - you already know how to do everything. The only
new parts are the parts that are actually new - such as locking to prevent
data modification races. In the LWP world, you now get to learn about
two kinds of processes, you have to invent new tools to monitor, debug,
and administer these processes, you have to learn about thread scheduling
versus process scheduling, thread signals versus process signals, etc.
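
To make the "nothing new to learn" point concrete: a clone()-style
thread is just a process that asked to share its parent's address
space. A minimal sketch (Linux-specific, using the glibc clone()
wrapper; error handling trimmed to the essentials, and a
downward-growing stack is assumed):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int counter;             /* shared because of CLONE_VM */

    static int worker(void *arg)
    {
        counter++;                  /* writes the parent's memory */
        return 0;
    }

    int main(void)
    {
        size_t sz = 16 * 1024;      /* the per-thread stack cost */
        char *stack = malloc(sz);

        /* CLONE_VM | CLONE_FS | CLONE_FILES is roughly "a thread";
         * drop the flags and clone() degenerates into fork(). */
        pid_t pid = clone(worker, stack + sz,
                          CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD,
                          NULL);
        waitpid(pid, NULL, 0);      /* same old wait, same old pid */
        printf("counter = %d\n", counter);
        free(stack);
        return 0;
    }

Every tool that works on a process (ps, kill, wait, /proc) works on
this thing unchanged; the only genuinely new part is that "counter"
would need locking if two of them ran at once.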

It's all a crock of doo doo that was deemed necessary because
"context switches cost too much".

First of all, it's extremely rare that anyone who says "context switches
cost too much" actually knows what they cost, or knows if they are even
within an order of magnitude of being a critical performance bottleneck.

Second, for those rare cases where they actually do cost too much,
that's only on crappy operating systems. The last time I checked, Linux
process context switches were faster than Solaris LWP context switches.
So much for that argument.

Third, the context switch time on any halfway decent OS quickly becomes
dominated by cache misses caused by rebuilding the real process context.
Context switch benchmarks typically (lmbench is a notable exception,
shameless plug) do not measure anything but the OS context switch -
i.e., how fast can I switch from one thread of execution to the next.
If you get this code path small enough, you can fit the whole thing in
the L1 cache like Linux does and get ~1 microsecond context switches.
And the user level scheduler people just love to brag about numbers that
are even better (though typically not much). But that's just benchmarking
crap - people don't context switch to do nothing, they context switch to
do some work. That work turns into cache misses. If you miss 5 or 6
times, you're up to the Linux kernel level context switch time. If you
are talking about those wonderful user level schedulers, you are at the
context switch time within 2-3 cache misses. Do you really think you
are going to context switch in and take fewer than half a dozen cache
misses before you go to the next thread? I doubt it. If I'm right,
then the real context switch time is actually dominated by rebuilding
your cache footprint, NOT by the switch mechanism itself, regardless of
whether we are talking about
kernel level, user level, whatever. So it's a marketing argument that
the context switches are the issue, not a real argument.
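
To put rough numbers on that (figures assumed for illustration, not
measured): say a kernel context switch costs ~1 microsecond and a main
memory cache miss costs ~150-200ns. Then

    6 misses x ~170ns  ~=  1 microsecond  =  one whole kernel switch

so by the time the incoming thread has touched half a dozen cold cache
lines, the "faster" user level switch has bought you nothing. This is
why lat_ctx in lmbench lets you give each process a working set: it
measures the switch with the cache refill included.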

Srk (Steve Kleiman), in private conversations with me, admitted that
threads were heavily oversold and that 99% of what you wanted to do you
could do with processes and mmap(); the remaining 1% could be done with
clones.
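
As a sketch of that 99% case (POSIX processes plus an anonymous shared
mapping; MAP_ANONYMOUS as on Linux and the BSDs, error checks trimmed):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* One int shared on request; everything else stays private. */
        int *counter = mmap(NULL, sizeof *counter,
                            PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        *counter = 0;

        if (fork() == 0) {          /* child sees the same mapping */
            *counter += 1;
            _exit(0);
        }
        wait(NULL);
        printf("counter = %d\n", *counter);   /* prints 1 */
        return 0;
    }

You share exactly the data you meant to share and nothing else, which
is most of what threads get bought for in the first place.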

: > So do you have any supporting data which makes a case that LWP's would
: > be better than the current model? I'm willing to believe there is such
: > data, but at this point I'm at a loss as to what it could be.
:
: Hmmm. I can't claim to have any better exposure than you; I was
: asking more from a consumer point of view.
:
: My perspective on LWP's is from a Solaris background so your comments are
: very interesting, indeed, given the respect the Solaris model has earned.

The Solaris model is fine except that it doesn't need to exist. They could
support clone() just fine if they wanted to.

: I think the differences I see aren't so much technical as in the way
: they're interacted with (and perceived by someone who's been around
: Unix too long).
: For example, I'd expect a ps (or ls in /proc) to list LWP's as separate
: processes (I assume), when it ought not to. The impression of different
: pid's is of different processes, which the clone model breaks.

I personally think the other way is busted. I think the Linux way
is correct. Suppose you have a N thread program, done with LWP's on
Solaris and clones on Linux. You want to see if it is unbalanced on
an N cpu system. On Linux, you'd use the same tools that you'd use if
it were a bunch of cooperating processes (because it is), i.e., you'd
fire up top(1) and look at 'em. On Solaris, if you did the same thing,
all you'd see is one busy process but you couldn't tell (using standard
tools) which thread was the busy one and which ones were hanging around
doing nothing.

I'm sure other people will argue that they don't want their top listings
or ps listings ``cluttered up'' with all those threads. I don't agree.
That attitude says that threads are so light that it's OK to have so
many of them that listing them all would be too busy. BS. Those
threads cost, and cost big time. Every thread is, at the very least, a
1 or 2 page stack, typically 8 to 16KB. You got 1000 threads? Cool,
that's 16MB
of stacks. Great idea. Not.

I said a long time ago: ``Threads are like salt. You like salt, I
like salt, but we eat a lot more pasta than salt.'' The thread guys
are trying to tell you that a diet of salt is a good idea. They are wrong,
don't listen, eat more pasta and be happy.
