Subject: Re: accept() improvements for rt signals

Hello,

On Sun, 20 Feb 2000, Dan Kegel wrote:

> Julian Anastasov wrote:
> > > > One thing we can do to deal with some of this is to have a new fcntl to
> > > > augment F_SETOWN, FASYNC and F_SETSIG. A F_SETFAST() would perform a
> > > > setsig and a setown at once, _and_ would accept a poll struct into which
> > > > it would fill the current state of the socket. ...
> >
> > Sounds good but we have to use this system call after each
> > accept() which compared with the fd ownership/flags inheritance
> > is slower.
>
> I agree, if the fd created by accept() inherits the ownership & flags
> from the listening fd, that gets rid of the race condition when an event
> comes in between accept() and F_SETFAST.
> But it has a race condition in the case where a single thread listens
> and passes connections to a bunch of nonlistening threads;
> then you'd need to do a F_SETOWN immediately after accept(),
> else an event might go to the wrong place.

But using threads I'm limited to a small number of served
clients (sockets), at least in 2.2: memory, pids. I'm talking about
32768+ open file descriptors with low TCP traffic.

> If we want that case to be race-free, we might need a system call
> that atomically did accept(), F_SETOWN, FASYNC and F_SETSIG together.
> (Can anyone channel W. Richard Stevens? He was pretty good at seeing race conditions...)

Maybe. At least we can set the socket/fd flags before
the loop of accept()s.
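
For reference, the per-connection setup under discussion normally looks
something like the sketch below (a minimal sketch, not from the original
mail; the helper name setup_rtsig is made up, and it assumes the usual
F_SETSIG/F_SETOWN/O_ASYNC interface). The race is exactly the window
between accept() returning and the last fcntl() taking effect.

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

static int setup_rtsig(int fd, int signum)
{
	int flags;

	if (fcntl(fd, F_SETSIG, signum) < 0)	/* which rt signal to queue */
		return -1;
	if (fcntl(fd, F_SETOWN, getpid()) < 0)	/* who receives it */
		return -1;
	flags = fcntl(fd, F_GETFL);
	if (flags < 0)
		return -1;
	/* anything that happened on fd before this point was never queued:
	 * that gap is the race window discussed above */
	return fcntl(fd, F_SETFL, flags | O_ASYNC | O_NONBLOCK);
}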

>
> > > Here's another possible race:
> > > User closes fd 5. Events for fd 5 are still in the signal queue.
> > > User does accept(), which returns a new fd 5.
> >
> > If we use small number of listeners (usually one) we
> > can flag that sigwaitinfo() returned info about activity on
> > the listener and to accept() the next connections after
> > ensuring the rt queue is empty. By this way we know that no new
> > fd is reused after our close() and there is no notification
> > in the queue for already closed sockets. OK, it is possible
> > the accept() to be delayed forever if the rt queue never gets
> > empty, i.e. on very busy server. We can't stop the kernel to
> > enqueue more and more events.
>
> If you empty the queue with a tight loop, and defer processing
> the events, that would let you properly ignore stale fd events.
> But if there are multiple threads both listening for new connections
> and handling old ones, that might give all the pending events to the
> thread about to do an accept(). Is that good?

It is not possible to enqueue events for an fd which is not
set up for rt signals. So, I don't think accept() and F_SETFAST as
two separate calls is the solution; it has to be one call.
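
For what it's worth, the "drain the queue before accept()" idea above
could look roughly like this (a sketch under the assumption that one rt
signal, SIGRTMIN here, is used for everything and is already blocked in
the calling thread; drain_then_accept is a made-up name):

#define _GNU_SOURCE
#include <errno.h>
#include <signal.h>
#include <stddef.h>
#include <sys/socket.h>
#include <time.h>

/* assumes SIGRTMIN is already blocked in this thread */
static int drain_then_accept(int listen_fd)
{
	struct timespec zero = { 0, 0 };
	siginfo_t info;
	sigset_t set;
	int pending_accept = 0;

	sigemptyset(&set);
	sigaddset(&set, SIGRTMIN);

	/* pull every pending event; anything for an fd we already
	 * closed is simply discarded here */
	while (sigtimedwait(&set, &info, &zero) >= 0) {
		if (info.si_fd == listen_fd)
			pending_accept = 1;
		/* else: handle or discard the event for info.si_fd */
	}
	if (errno != EAGAIN)
		return -1;		/* real sigtimedwait() failure */

	/* the queue is empty now, so a new fd cannot collide with a
	 * stale event */
	if (pending_accept)
		return accept(listen_fd, NULL, NULL);
	return -1;
}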

>
> Also, what if there's more than one queue (e.g. a listening
> socket with one owner/signum, and connected sockets with a
> different owner/signum)? Which queue do you empty? Better
> to use an atomic accept()+F_SETFAST combo that generates an
> 'fd created' signal, then there's no need to empty any queue.

Yep. I agree.

>
> > I think in normal situations we don't need notifications
> > from accept(). These events are synchronous. Is there any reason
> > for these notifications? If the accepted socket is automatically
> > setup for rt signals we don't need notifications from accept().
> > But may be they are required if there are still events in the rt
> > queue for the closed descriptors and we must know the order of
> > the events.
>
> The accept (well, fd creation) events are there so you
> don't have to make sure the queue is empty before you call accept().
> Not only that - they make it so the user variable that tracks
> the poll state of the fd can be fed solely by events, without ever
> bypassing the signal queue. This seems like a Good Thing for
> eliminating race conditions.

Not for 2.2. Is there any expectation that si_band will be
used in 2.2?
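
If si_band does get filled in (as it is on later kernels), the
per-event dispatch could be as simple as this sketch (handle_event is a
made-up name; the bits are the usual poll() bits):

#define _GNU_SOURCE
#include <poll.h>
#include <signal.h>

static void handle_event(const siginfo_t *info)
{
	if (info->si_band & POLLIN) {
		/* data, or a new connection, is ready on info->si_fd */
	}
	if (info->si_band & (POLLHUP | POLLERR)) {
		/* hangup or error on info->si_fd: close it and drop
		 * our state for it */
	}
}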

>
> > > Your proposed new ioctl could eliminate both the race you're worried about
> > > *and* the one I just pointed out if it also delivered a signal that said
> > > "fd 5 created". The user would then discard any events for fd 5
> > > received after the close() but before the "fd 5 created" signal.
> >
> > But this problem is not only for accept(). We can create
> > non-socket descriptors and their events must be notified too.
> > This sounds very complex.
>
> e.g. sockets created by open() on special files? I suppose they
> would need an atomic open()/F_SETFAST+signal generating combo call,
> too. But I think we can leave that for later, since nobody I know is
> doing that kind of I/O at the moment.

No, I'm talking about notifications from dup(), socket(), etc.,
normal functions which create file descriptors. But now I think this
is not a problem: we receive notifications only from fds which are
set up for rt signals.

> > > That "fd 5 created" signal would tell the user code to reset its
> > > "fd 5 poll status" variable, which would then be updated by the sigio
> > > signals whenever fd 5's poll status changed. No need for an initial
> > > call to poll() or a poll struct * in F_SETFAST then.
> >
> > In fact, I performed only some tests. But I think we
> > even don't need to fallback to poll() on SIGIO.
>
> Perhaps you misunderstand - I wasn't talking about poll() fallback.
> I was pointing out that adding the "fd created" signal means
> that the signals become a complete specification of the
> poll state of the fd; there's no gap between creation and turning
> on events where the fd's poll state can change unseen.

I know. I only asked about the rt queue limit and
the requirement to fall back to poll() after an overflow. I think
the limit depends on the number of async fds and the number
of their possible events.
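
The usual fallback, sketched below under assumptions about the
application structure (the all_fds table and the 32768 limit are made
up for illustration): a plain SIGIO means the rt queue overflowed and
events were dropped, so the only safe move is one full poll() sweep to
rebuild the per-fd state.

#define _GNU_SOURCE
#include <poll.h>
#include <signal.h>
#include <string.h>

#define MAX_FDS 32768				/* matches the fd counts above */

static struct pollfd all_fds[MAX_FDS];		/* made-up table of async fds */
static int nr_fds;

static volatile sig_atomic_t queue_overflowed;

/* a plain SIGIO means the rt queue overflowed and events were dropped */
static void sigio_handler(int sig)
{
	(void)sig;
	queue_overflowed = 1;
}

static void install_overflow_handler(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = sigio_handler;
	sigaction(SIGIO, &sa, NULL);
}

static void resync_if_needed(void)
{
	if (!queue_overflowed)
		return;
	queue_overflowed = 0;
	/* the queued events can no longer be trusted: rescan everything once */
	if (poll(all_fds, nr_fds, 0) > 0) {
		/* walk all_fds[i].revents and refresh the per-fd state */
	}
}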

>
> > So, what about adding flag to let accept() to copy
> > the ownership (f_owner) and fd flags (file->f_flags) from
> > the listener to the connected socket. I don't see other races
> > if this is the only change to the current kernels.
>
> It's a good idea, but I think I pointed out a race condition
> above for server programs that want a single listener socket
> plus a bunch of worker sockets with a different owner/signum.
> The only solution for that one is an atomic
> accept()+F_SETFAST+"fd created" signal queueing call (right?).

Right. But the model with many worker threads has some
limitations. What are the current limits on served sockets for
2.2 when we have up to 4090 pids, 2 (4) GB RAM, etc.? I prefer to
patch the kernel for many file descriptors per process and not
to use threads. It depends on the service: httpd, ircd, etc. Maybe
I have to switch to 2.3 for more threads?

>
> > The other
> > problem is how to ensure that the events are not for already
> > closed descriptors. May be we can play with different signal
> > numbers, i.e. we have to check if the signum in the event
> > is the expected signum for the descriptor. If not, this is
> > event for old fd. We need min 2 signal numbers for this.
> > We have to remember which was the last signum used for
> > this fd and to set another one using F_SETFAST (if added in
> > the kernel). May be there are other better solutions.
>
> I don't like the swapping between two signal numbers idea,
> simply because it could be easy to get confused which one was
> which...

Yep. I don't like workarounds either. So, I agree to:

- set up the listener for rt signals and duplicate its flags
and ownership to the accepted socket

- have accept() enqueue an event for the connected socket

- on 2.2: use the si_band information to distinguish the events

(A rough sketch of this model follows below.) Nothing, though, helps
a multithreaded server redirect the events for the connected socket
to another thread. Maybe a new accept() call? Or the listening thread
has to serve these events before handing the socket to another thread.
This can happen with a very fast network and a very slow CPU :)
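
To make the agreed model concrete, here is a rough sketch. It assumes
a kernel patched as proposed in this thread, so that the accept()ed
socket inherits the listener's F_SETSIG/F_SETOWN/O_ASYNC settings and
an event is queued for it; the function name is made up:

#define _GNU_SOURCE
#include <fcntl.h>
#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>

static void event_loop(int listen_fd, int signum)
{
	siginfo_t info;
	sigset_t set;

	sigemptyset(&set);
	sigaddset(&set, signum);
	sigprocmask(SIG_BLOCK, &set, NULL);

	/* the listener is set up once; with the proposed patch the
	 * accepted sockets inherit all of this */
	fcntl(listen_fd, F_SETSIG, signum);
	fcntl(listen_fd, F_SETOWN, getpid());
	fcntl(listen_fd, F_SETFL,
	      fcntl(listen_fd, F_GETFL) | O_ASYNC | O_NONBLOCK);

	for (;;) {
		if (sigwaitinfo(&set, &info) < 0)
			continue;
		if (info.si_fd == listen_fd) {
			/* may return -1/EAGAIN if the event was stale */
			int conn = accept(listen_fd, NULL, NULL);

			/* nothing else to set: conn already queues signum
			 * and its own "fd created" event (per the proposal) */
			(void)conn;
		} else {
			/* serve info.si_fd according to info.si_band */
		}
	}
}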


Regards

--
Julian Anastasov <uli@linux.tu-varna.acad.bg>


