Subject: Re: Thread implementations...
On Fri, 19 Jun 1998, David S. Miller wrote:

> I look at it this way.
>
> If you can divide the total set of fd's logically into separate
> groups, each one strictly assigned to a particular thread, do it this
> way. The problem with one thread polling all fd's and passing event
> notification to threads via some other mechanism is that this one
> thread becomes the bottleneck.

I realize that any operation performed inside that process/thread, if it
takes noticeable time, will hold back everything that depends on any fd
status change. But what if the code is optimized to reduce the time spent
in the loop to the absolute minimum? Will poll() itself take more time
(and indeed become a bottleneck) in one thread vs. multiple poll()'s made
simultaneously in multiple threads? If the time spent in the loop is
minimal, is there any real difference between waking up one of several
looping threads, which searches its own small poll array and performs
some action, and waking up a single thread every time, which searches a
larger array (IMHO not a significant cost compared to the time the system
spends processing those sockets) and then performs the same action,
provided that action takes insignificant time, comparable to the buffer
handling in the kernel itself? As I understand it, with multiple threads
or not, the kernel still needs time to check the file descriptors and
choose a thread to wake up even if the threads have already divided the
fds among themselves, so the total amount of fd-list scanning won't change.
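
To make the comparison concrete, the single-threaded case I have in mind
is roughly the loop below (only a sketch; the function name is made up,
fds[] is assumed to be filled in already with .fd and .events = POLLIN,
and error handling and the real per-request work are left out):

#include <poll.h>
#include <unistd.h>

/* One thread, one poll() over the whole set, one linear scan. */
void event_loop(struct pollfd *fds, int nfds)
{
        char buf[4096];
        int i, n;

        for (;;) {
                n = poll(fds, nfds, -1);   /* sleep until some fd is ready */
                if (n <= 0)
                        continue;          /* EINTR etc. -- just retry */

                for (i = 0; i < nfds && n > 0; i++) {
                        if (fds[i].revents & (POLLIN | POLLHUP | POLLERR)) {
                                /* keep the per-event work minimal: grab what
                                 * is available and hand it off, never block
                                 * here */
                                read(fds[i].fd, buf, sizeof(buf));
                                n--;
                        }
                }
        }
}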

> The problem, for one, with web etc. servers is the incoming connection
> socket. If you could tell select/poll "hey, when a new conn comes in,
> wake up one of us", poof, this issue would be solved. However the
> defined semantics for these interfaces say to wake up everyone polling
> on it.

This is why I do that in userspace -- only one process ever wakes up for
new connections: the connection is placed in its internal queue, its fd is
added to its polling list, and once the request has been received and
parsed asynchronously, the fd is immediately passed to another process
over an AF_UNIX socket. While the main process does nonblocking I/O on
many connections, the only I/O in that loop is accepting new connections,
reading from them, and handing other processes the fds/data of connections
that have sent their requests and now expect a response. A kind of
userspace "multithreading", optimized for this particular operation.
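
For what it's worth, the hand-off itself is just the usual SCM_RIGHTS
ancillary-data mechanism on the AF_UNIX socket; roughly this (a sketch
only -- the helper name is made up, and the real code also sends the
parsed request along with the descriptor):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Pass one already-open fd to the peer on an AF_UNIX stream socket. */
int send_fd(int unix_sock, int fd_to_pass)
{
        struct msghdr msg;
        struct iovec iov;
        struct cmsghdr *cmsg;
        char cbuf[CMSG_SPACE(sizeof(int))];
        char dummy = 0;

        memset(&msg, 0, sizeof(msg));
        iov.iov_base = &dummy;          /* must carry at least one data byte */
        iov.iov_len = 1;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cbuf;
        msg.msg_controllen = sizeof(cbuf);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;   /* the ancillary data is the fd itself */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

        return sendmsg(unix_sock, &msg, 0);
}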

Possible problems could come either from poll() scalability (will it take
more time than the same work spread over multiple threads polling
simultaneously?), from unexpectedly long time spent reading data from the
sockets, or from delays in fd passing, which, I assume, is followed by a
context switch to the receiving process -- not unlike the wake-one
behavior described by you and Dean.

--
Alex

