Date: 20 Jun 1998
From: Dean Gaudet
Subject: Re: Thread implementations...


On Sat, 20 Jun 1998, Richard Gooch wrote:

> Dean Gaudet writes:
> >
> > On Fri, 19 Jun 1998, Richard Gooch wrote:
> >
> > > On the other hand you could say that the UNIX semantics are fine and
> > > are quite scalable, provided you use them sensibly. Some of these
> > > "problems" are due to applications not being properly thought out in
> > > the first place. If for example you have N threads each polling a
> > > chunk of FDs, things can run well, provided you don't have *each*
> > > thread polling *all* FDs. Of course, you want to use poll(2) rather
> > > than select(2), but other than that the point stands.
> >
> > You may not be able to exploit the parallelism available in the hardware
> > unless you can "load balance" the descriptors well enough...
>
> Use 10 threads. Seems to me that would provide reasonable load
> balancing. And increasing that to 100 threads would be even better.

No, it wouldn't. 100 kernel-level threads is overkill. Unless your box
can do 100 things at a time, there's no benefit in giving the kernel 100
objects to schedule. 10 is a much more reasonable number, and even that
may be too high. You only need as many kernel threads as there is
parallelism to exploit in the hardware. Everything else can, and should,
happen in userland, where timeslices can be maximized and context switches
minimized.
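
For what it's worth, sizing the pool to the hardware rather than to the
number of descriptors is trivial. A minimal sketch, assuming glibc's
_SC_NPROCESSORS_ONLN is available; the worker function stands in for
whatever services one chunk of FDs with poll():

#include <unistd.h>
#include <pthread.h>

extern void *worker(void *arg);   /* services one chunk of FDs */

/* Start at most one kernel thread per CPU actually present. */
int spawn_workers(pthread_t *tids, int max)
{
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);   /* glibc extension */
    int n = (ncpus > 0 && ncpus < max) ? (int)ncpus : max;
    int i;

    for (i = 0; i < n; i++)
        if (pthread_create(&tids[i], NULL, worker, (void *)(long)i) != 0)
            break;
    return i;   /* number of threads actually started */
}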

> The aim is to ensure that, statistically, most threads will remain
> sleeping for several clock ticks.

What? If I am spending system memory on a kernel-level thread, I'm not
going to go about ensuring that it remains asleep! No way. I'm going to
use each and every timeslice to its fullest, because context switches
have a non-zero cost; it may be small, but it is non-zero.

> With a bit of extra work you could even slowly migrate consistently
> active FDs to one or a few threads.

But migrating them costs you extra CPU time, time which, strictly
speaking, does not need to be spent. NT doesn't have to spend it when
using completion ports (I'm sounding like a broken record).

Look at this another way. If I'm using poll() to implement something,
then I typically have a structure that describes each FD and the state it
is in. I'm always interested in whether that FD is ready for read or
write. When it is ready I'll do some processing, modify the state,
read/write something, and then do nothing with it until it is ready again.
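
Concretely, the loop I have in mind looks something like this (a minimal
sketch; the struct, the states and the handler are made up for
illustration):

#include <poll.h>

enum conn_state { READING_REQUEST, WRITING_REPLY };

struct conn {                     /* per-FD state described above */
    int             fd;
    enum conn_state state;
    /* buffers, protocol state, ... */
};

extern void handle(struct conn *c);   /* read/write, update state */

void serve(struct conn *conns, struct pollfd *pfds, int n)
{
    int i;

    for (;;) {
        for (i = 0; i < n; i++) {
            pfds[i].fd     = conns[i].fd;
            pfds[i].events = (conns[i].state == WRITING_REPLY)
                             ? POLLOUT : POLLIN;
        }
        if (poll(pfds, n, -1) <= 0)
            continue;
        for (i = 0; i < n; i++)
            if (pfds[i].revents & (POLLIN | POLLOUT))
                handle(&conns[i]);
        /* note: every pass hands the kernel all n FDs, ready or not */
    }
}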

To do this I hand the kernel a list of all the FDs and call poll(). Then
the kernel goes around and polls everything. For many descriptors (e.g.
slow long-haul internet clients) this is a complete waste. There are two
approaches I've seen to deal with this:

- don't poll everything as frequently: do complex migration between
different "pools" sorted by how active each FD is. This reduces the number
of times slow sockets are polled. It's a win, but I feel it is far too
complex (read: easy to get wrong).

- let the kernel queue an event when the FD becomes ready. So rather than
calling poll() with a list of hundreds of FDs, we tell the kernel on a
per-FD basis "when this is ready for read/write, queue an event on this
pipe, and could you please hand me back this void * with it? thanks". In
this model, when a write() returns EWOULDBLOCK the kernel implicitly sets
that FD up as "waiting for write", and similarly for a read(). This means
that no matter what speed the socket is, it is never polled, and no
complex dividing of the FDs across threads needs to be done.

The latter model is a lot like completion ports... but probably far easier
to implement. When the kernel changes an FD in a way that could cause it
to become ready for read or write, it checks whether it's supposed to
queue an event. If the event queue becomes full, the kernel should queue
one event saying "event queue full, you'll have to recover in whatever way
you find suitable... like using poll()".
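
To make that concrete, the registration might look something like the
following. This is purely hypothetical; no such interface exists in the
kernel, and every name here is invented for illustration:

#include <unistd.h>

struct fd_event {                 /* what the kernel would queue */
    int    fd;
    short  readiness;             /* POLLIN and/or POLLOUT */
    void  *cookie;                /* the void * handed back to us */
};

/* "when fd becomes ready for `interest`, queue an fd_event on event_fd
 *  and hand me back this cookie" -- invented call, does not exist */
extern int fd_event_register(int event_fd, int fd, short interest,
                             void *cookie);

extern void handle_ready(void *cookie, short readiness);
extern void recover_with_poll(void);   /* fall back to poll() over all FDs */

void event_loop(int event_fd)
{
    struct fd_event ev;

    while (read(event_fd, &ev, sizeof(ev)) == (ssize_t) sizeof(ev)) {
        if (ev.fd == -1) {
            /* the single "queue overflowed" event described above */
            recover_with_poll();
            continue;
        }
        handle_ready(ev.cookie, ev.readiness);
    }
}

The point being that userland never has to walk the full list of
descriptors; the kernel only does work for FDs that actually change state.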

Dean




