Subject: TCP_DEFER_ACCEPT issues

I am trying to use TCP_DEFER_ACCEPT in my web server.

There are some operational problems. First of all: timeout handling. I
would like to be able to set a timeout in seconds (or better:
milliseconds) for how long the socket is allowed to sit there without
data coming in. For high-load situations I have been enforcing timeouts
in the range of 15 seconds; otherwise someone can DoS the server by
opening a lot of connections and tying up data structures.
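For illustration, a minimal sketch of that kind of userspace idle-timeout
enforcement (the names, the connection structure and the 15-second constant
are hypothetical, not taken from any particular server):

/* Hypothetical sketch: close any accepted connection that has not
 * produced data within IDLE_TIMEOUT seconds.  last_activity is assumed
 * to be updated whenever read() returns data for that connection. */
#include <time.h>
#include <unistd.h>

#define IDLE_TIMEOUT 15          /* seconds without data before giving up */

struct conn {
    int fd;                      /* -1 when the slot is unused */
    time_t last_activity;
};

static void reap_idle(struct conn *conns, size_t n)
{
    time_t now = time(NULL);
    for (size_t i = 0; i < n; i++) {
        if (conns[i].fd >= 0 && now - conns[i].last_activity > IDLE_TIMEOUT) {
            close(conns[i].fd);  /* drop the descriptor and its kernel state */
            conns[i].fd = -1;
        }
    }
}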

It is still possible, of course, to tie up kernel memory this way, by
not reacting to the FIN or RST packets and running into a timeout there,
too, but that is partially tunable via sysctl.

According to tcp(7), the int argument to TCP_DEFER_ACCEPT is in seconds;
in the kernel code it is converted to TCP timeout units. When I ran my
server and connected without sending any data, nothing happened. No
timeout. Minutes later, the connection was still there. Even worse:
when I killed (!) the server process (thus closing the server socket),
the client did not get a reset. Only when I typed something in the
telnet session did I get a reset. This appears to be very broken.
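For reference, the call in question looks roughly like this; per tcp(7)
the value is an int in seconds. This is only a sketch, assuming lfd is
an already bound, listening TCP socket:

/* Enable TCP_DEFER_ACCEPT on a listening socket: accept() should only
 * return the connection once data has arrived (or the timeout expires). */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>

static int enable_defer_accept(int lfd, int seconds)
{
    if (setsockopt(lfd, IPPROTO_TCP, TCP_DEFER_ACCEPT,
                   &seconds, sizeof(seconds)) < 0) {
        perror("setsockopt(TCP_DEFER_ACCEPT)");
        return -1;
    }
    return 0;
}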

My suggestion:

1. Make the argument to setsockopt be in seconds or, better, milliseconds.
2. If the server socket is closed, reset all pending connections.

Comments?

Felix
