Subject: [PATCH 3.16 20/63] tcp/dccp: drop SYN packets if accept queue is full
3.16.81-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Eric Dumazet <edumazet@google.com>

commit 5ea8ea2cb7f1d0db15762c9b0bb9e7330425a071 upstream.

Per listen(fd, backlog) rules, there is really no point accepting a SYN,
sending a SYNACK, and dropping the following ACK packet if the accept
queue is full, because the application is not draining the accept queue
fast enough.

This behavior fools TCP clients into believing they established a flow,
while there is nothing at the server side. They might then send about
10 MSS (if using IW10) that will be dropped anyway while the server is
under stress.
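
To make the failure mode concrete, here is a minimal userspace sketch
(not part of the patch; the loopback port, backlog value, and number of
clients are arbitrary illustrations). The listener never calls accept(),
so its accept queue fills; on a kernel without this change the overflowing
connect() calls can still succeed, while with it the extra SYNs are
dropped and those clients keep retransmitting SYNs instead of pushing
data into a void.

/* Illustrative only -- not from the patch.  A listener with a tiny
 * backlog that is never drained by accept(); extra clients then overflow
 * the accept queue.  Pre-patch, their connect() may still return 0 (the
 * handshake completed, only the final ACK was dropped); post-patch, their
 * SYNs are dropped and connect() blocks retransmitting SYNs until the
 * queue is drained or the attempt times out.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
	struct sockaddr_in addr;
	int lfd, cfd, i;

	lfd = socket(AF_INET, SOCK_STREAM, 0);
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(5555);		/* arbitrary test port */
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		return 1;
	}
	listen(lfd, 1);				/* tiny accept queue, never drained */

	for (i = 0; i < 4; i++) {		/* more clients than the backlog holds */
		cfd = socket(AF_INET, SOCK_STREAM, 0);
		if (connect(cfd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
			printf("client %d: connect() succeeded\n", i);
		else
			perror("connect");
	}
	return 0;
}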

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
[bwh: Backported to 3.16: Apply TCP changes in both tcp_ipv4.c and tcp_ipv6.c]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
--- a/include/net/inet_connection_sock.h
+++ b/include/net/inet_connection_sock.h
@@ -287,11 +287,6 @@ static inline int inet_csk_reqsk_queue_l
 	return reqsk_queue_len(&inet_csk(sk)->icsk_accept_queue);
 }
 
-static inline int inet_csk_reqsk_queue_young(const struct sock *sk)
-{
-	return reqsk_queue_len_young(&inet_csk(sk)->icsk_accept_queue);
-}
-
 static inline int inet_csk_reqsk_queue_is_full(const struct sock *sk)
 {
 	return reqsk_queue_is_full(&inet_csk(sk)->icsk_accept_queue);
--- a/net/dccp/ipv4.c
+++ b/net/dccp/ipv4.c
@@ -621,13 +621,7 @@ int dccp_v4_conn_request(struct sock *sk
 	if (inet_csk_reqsk_queue_is_full(sk))
 		goto drop;
 
-	/*
-	 * Accept backlog is full. If we have already queued enough
-	 * of warm entries in syn queue, drop request. It is better than
-	 * clogging syn queue with openreqs with exponentially increasing
-	 * timeout.
-	 */
-	if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1)
+	if (sk_acceptq_is_full(sk))
 		goto drop;
 
 	req = inet_reqsk_alloc(&dccp_request_sock_ops);
--- a/net/dccp/ipv6.c
+++ b/net/dccp/ipv6.c
@@ -391,7 +391,7 @@ static int dccp_v6_conn_request(struct s
 	if (inet_csk_reqsk_queue_is_full(sk))
 		goto drop;
 
-	if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1)
+	if (sk_acceptq_is_full(sk))
 		goto drop;
 
 	req = inet6_reqsk_alloc(&dccp6_request_sock_ops);
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1292,12 +1292,7 @@ int tcp_v4_conn_request(struct sock *sk,
 			goto drop;
 	}
 
-	/* Accept backlog is full. If we have already queued enough
-	 * of warm entries in syn queue, drop request. It is better than
-	 * clogging syn queue with openreqs with exponentially increasing
-	 * timeout.
-	 */
-	if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1) {
+	if (sk_acceptq_is_full(sk)) {
 		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
 		goto drop;
 	}
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1009,7 +1009,7 @@ static int tcp_v6_conn_request(struct so
 			goto drop;
 	}
 
-	if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1) {
+	if (sk_acceptq_is_full(sk)) {
 		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
 		goto drop;
 	}
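
For anyone verifying the backport: the TCP drop paths above bump
LINUX_MIB_LISTENOVERFLOWS, which is exported as ListenOverflows (next to
ListenDrops) in the TcpExt section of /proc/net/netstat. A trivial sketch,
assuming only that procfs is mounted, that dumps those counter lines:

/* Illustrative only: print the TcpExt header/value lines from
 * /proc/net/netstat; the ListenOverflows and ListenDrops columns count
 * connection requests dropped because the accept queue was full.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[4096];
	FILE *f = fopen("/proc/net/netstat", "r");

	if (!f) {
		perror("/proc/net/netstat");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		if (strncmp(line, "TcpExt:", 7) == 0)
			fputs(line, stdout);
	fclose(f);
	return 0;
}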