Subject: Re: Some very thought-provoking ideas about OS architecture.

> Sender: owner-linux-kernel@vger.rutgers.edu
> From: "Jeffrey B. Siegal" <jbs@quiotix.com>
> Date: Mon, 21 Jun 1999 11:07:41 -0700
> To: linux-kernel@vger.rutgers.edu
> Subject: Re: Some very thought-provoking ideas about OS architecture.
> -----
> > Batching messages between processes can be good, but what percentage of the
> > time is this practical?

How often it is practical is a misleading question if you look only at
existing interfaces without redesign. I've never seen a good study of what
could be done given equivalent, but differently designed, interfaces.

> > In most robust applications you tend to need the
> > result of one step before you are sure what the next step is, due to errors,
> > etc. There are obvious exceptions. I could initiate a bunch of writes to
> > different places, and look at all the results in one go.



Not always: and the question is, robust to what? How practical it is
often turns out to be a strong function of interface design and of the
application. You have significant control over the outcome, at least
when you are the one making the interface design decisions.

If my connection to the X server fails, having checked for a write error
on every library call isn't going to change the outcome: I've got a failure
on my hands that is going to be very hard to recover from, and finding
out somewhat sooner isn't a significant help to that application. The
application has a mess on its hands in either case, and even when you
are on the same machine as the X server, you can't afford the overhead
of a round trip (RTT) between the application and the server.

Another point John Ousterhout made recently (though I disagreed with much
of his Usenix talk) that I do agree with is that a lot of operations
treated as "errors" really aren't; avoid such pseudo-errors in your
interface design. In the X context, we made a window not existing an
"error", when in many applications it is not an error at all (e.g. window
managers). I now believe this was a mistake.

C library stdio buffering was introduced precisely because, for that
interface, you can't afford even one system call per operation.
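
To illustrate the point (a minimal sketch of my own, not from the original
discussion): the buffered loop below issues only a handful of write(2)
system calls, while the unbuffered loop pays a user/kernel crossing for
every single byte.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int i;

        /* Buffered: each fputc() just drops a byte into stdio's
         * user-space buffer; write(2) is issued only when the buffer
         * fills, or at fflush()/exit. */
        for (i = 0; i < 100000; i++)
            fputc('x', stdout);
        fflush(stdout);

        /* Unbuffered: one system call, and one user/kernel crossing,
         * per byte written. */
        for (i = 0; i < 100000; i++)
            write(STDOUT_FILENO, "x", 1);

        return 0;
    }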

>
> To work this requires the interface be designed to avoid the need for
> round trips (returned results). For example, X11 protocol is designed
> to need relatively few results. Rather than returning, say, a window
> ID, the client passes a window ID to the server which uses that ID to
> identify the window in subsequent requests. So rather than:
>
> CreateWindow() ->
>
> <- WindowID
>
> OperateOnWindow(WID) ->
>
> You have
>
> CreateWindow(WID) ->
> OperateOnWindow(WID) ->
>
> In this case, CreateWindow and OperateOnWindow can clearly be batched
> into a single message. Failures are reported back asynchronously (the
> equivalent in a system call context would be a signal); on the rare
> occasion that the client really needs to know whether an operation
> succeeded, it can do so by forcing a round trip. Generally, it doesn't
> matter and this overhead is avoided.
>
> Without redesigning the Unix-derived system call interface to use
> similar techniques, I don't see how this would yield any significant
> benefit.

Yup. The CreateWindow optimization was explicitly done during the X11
redesign of X. But we found we didn't go far enough in avoiding round
trips (e.g. my InternAtom example in my last message). On the wire,
however, they are still two messages in the X protocol stream; it is just
that we (typically) bundle them into one TCP segment; the X server reads
as much as it can all at once and processes it.
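
A minimal sketch of the client-side mechanics (the function names, opcodes
and request layout here are made up for illustration; the real Xlib and
wire format differ): the client allocates the resource ID itself, appends
both requests to an output buffer, and a single write(2), hence typically
a single TCP segment, carries the whole batch to the server.

    #include <string.h>
    #include <unistd.h>

    /* Illustrative only: real X requests have a different layout. */
    struct request {
        unsigned char opcode;
        unsigned int  window_id;
    };

    static unsigned char outbuf[4096];     /* pending, unsent requests */
    static size_t        outlen;
    static int           server_fd;        /* connection to the server */
    static unsigned int  next_id = 1;      /* client-allocated IDs */

    static void queue_request(unsigned char opcode, unsigned int wid)
    {
        struct request r = { opcode, wid };

        /* Batch the request; nothing goes on the wire yet.
         * (No overflow check, for brevity.) */
        memcpy(outbuf + outlen, &r, sizeof r);
        outlen += sizeof r;
    }

    static void flush_requests(void)
    {
        write(server_fd, outbuf, outlen);  /* one write, no round trip */
        outlen = 0;
    }

    void create_and_use_window(void)
    {
        unsigned int wid = next_id++;      /* no reply needed for the ID */

        queue_request(1 /* "CreateWindow" */, wid);
        queue_request(8 /* "MapWindow" */, wid);
        flush_requests();                  /* both requests leave together */
    }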

The best time to enable efficient batching, pipelining, etc., is in the
initial systems design. For many operations, X ran faster over a network
than locally (after all, you get two systems working on the same
problem). I don't know if this is as true today, as the relative costs
of CPU vs. network have been changing; I should perform some experiments
on a 100-megabit network...

One place I disagree (in a sense) with Linus in this thread is where he says:


> - Most operations are going to be local. Any operating system that
> starts out from the notion that most operations are going to be
> remote is going to die off as computers get more and more powerful.
>
> Things may start off distributed, but in the end network bandwidth is
> always going to be more expensive than CPU power.
>

It isn't that these statements are not true: they are...

But if you don't avoid round trips in interface design, many operations
you might want to be able to run remotely may perform like dogs. Avoiding
round trips affects initial systems design, as it is often difficult or
impossible to retrofit later (I know: we recently retrofitted it into
HTTP, and it is much harder to do after the fact than up front). Getting
round trips back out of a protocol is usually much harder than almost
anything else, and we now live in a worldwide network where the speed of
light isn't going to change any time soon, even optimistically.
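
For concreteness, here is roughly what that buys over HTTP on a persistent
connection (a sketch of my own; the host and paths are placeholders): both
requests go out before any reply is read, so the client pays one network
round trip instead of two.

    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct addrinfo hints, *res;
        const char *reqs =
            "GET /a.html HTTP/1.1\r\nHost: www.example.com\r\n\r\n"
            "GET /b.html HTTP/1.1\r\nHost: www.example.com\r\n"
            "Connection: close\r\n\r\n";
        char buf[4096];
        ssize_t n;
        int fd;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("www.example.com", "80", &hints, &res) != 0)
            return 1;
        fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
            return 1;

        /* Both requests are written back to back on one connection,
         * before reading any reply: one round trip, not two. */
        write(fd, reqs, strlen(reqs));

        /* The replies come back, in order, on the same connection. */
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        freeaddrinfo(res);
        return 0;
    }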

And I wish we'd designed a tighter wire protocol for X, which goes
exactly to Linus's second point above. In most environments X is more
limited today by available network bandwidth or RTT latency than by CPU
speed; this can even be true on the same machine, as memory gets further
and further away from the processor.

Then again, I'd characterize our design center in the 1980s as the campus-
or metropolitan-scale network and 1-3 MIPS machines; the Internet existed,
but shall we say that the Internet wasn't seriously usable for X traffic
(or much of anything else) in 1988, during the congestion collapse of that
time, so I don't feel bad about it.... All 20-20 hindsight, of course.


- Jim

P.S. Pet peeve alert!

My major complaint about people who think memory is free is that they
generally don't understand that touching less memory will make their
systems run a lot faster; and this is the technological trend I don't
see changing anytime soon... Here's a concrete set of examples:

o The X server got much faster when the data structures were shrunk by
60% eight or so years ago (between X11R2 and X11R3 and X11R4), trading a
few instructions for a much more compact data structure representation
(a crude sketch of the idea follows these examples).

o Keith Packard saw similar effects when he recently rewrote the frame
buffer code (this time, shrinking the total number of instructions in the
frame buffer code, i.e. its code space, by an order of magnitude). This is
feasible because memory is now so much further away from the processor
that it pays to pull instructions back into the inner loop, rather than
unrolling them into independent loops! The code is I/O-bus bound even
though it executes many more instructions than the original frame buffer
code did (that code was developed and optimized around 1988-1990), with a
drastic reduction in code space and complexity. (Coming soon to X servers
near you: you'll get about 0.5 megabytes of memory back; it is the code I
was running on my Itsy at LinuxExpo.)
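
A crude illustration of the first point above (the structures are made up;
they are not the real X server data structures): choosing small fixed-width
fields and spending only a byte on a boolean can shrink a structure
dramatically, so more of them fit in each cache line and walking them
touches far less memory.

    #include <stdint.h>
    #include <stdio.h>

    /* Loose layout: natural-width fields, with padding between them. */
    struct window_loose {
        int   mapped;               /* used only as a boolean */
        long  x, y;                 /* never exceed screen coordinates */
        long  width, height;
        int   border_width;
        void *parent;
    };

    /* Compact layout: the same information in small fixed-width fields. */
    struct window_compact {
        uint16_t x, y, width, height;
        uint8_t  border_width;
        uint8_t  mapped;
        void    *parent;
    };

    int main(void)
    {
        printf("loose:   %zu bytes\n", sizeof(struct window_loose));
        printf("compact: %zu bytes\n", sizeof(struct window_compact));
        return 0;
    }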


