Subject: Re: [PATCH] tcp: splice as many packets as possible at once
On Sun, Jan 11, 2009 at 02:14:57PM +0100, Eric Dumazet (dada1@cosmosbay.com) wrote:
> >> 1) the release_sock/lock_sock done in tcp_splice_read() is not necessary
> >> to process the backlog. It's already done in skb_splice_bits()
> >
> > Yes, in tcp_splice_read() they were added to remove a deadlock.
>
> Could you elaborate? A deadlock only if !SPLICE_F_NONBLOCK?

Sorry, I meant that we drop the lock in skb_splice_bits() to prevent
the deadlock, and tcp_splice_read() needs the release_sock/lock_sock
pair to process the backlog.
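
For reference, the lock drop in skb_splice_bits() looks roughly like
this (a paraphrased sketch of net/core/skbuff.c, not quoted verbatim):

	/*
	 * Paraphrased sketch: we enter skb_splice_bits() with the
	 * socket lock held, while splice_to_pipe() takes the pipe
	 * inode mutex.  The sendfile() path takes those two locks in
	 * the opposite order, so holding sk_lock across
	 * splice_to_pipe() could deadlock against it.
	 */
	release_sock(skb->sk);
	ret = splice_to_pipe(pipe, &spd);
	lock_sock(skb->sk);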

I think that even with non-blocking splice the release_sock/lock_sock
pair is needed, since it lets us do two jobs in parallel: receive new
data in bh context (scheduled by the early release_sock backlog
processing) and process the already received data via the splice
codepath. Maybe in non-blocking splice mode this is not an issue, but
in blocking mode it allows skb_splice_bits() to grab more skbs at once.
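
To make that concrete, here is a rough sketch of the blocking-splice
loop I have in mind (simplified, not the literal tcp_splice_read()
code; tss and __tcp_splice_read() stand for the patched splice state
and helper):

	lock_sock(sk);
	while (tss.len) {
		ret = __tcp_splice_read(sk, &tss);	/* feed the pipe */
		if (ret < 0)
			break;
		if (!ret) {
			if (sk->sk_err || sk->sk_state == TCP_CLOSE ||
			    (sk->sk_shutdown & RCV_SHUTDOWN) ||
			    signal_pending(current))
				break;
			if (flags & SPLICE_F_NONBLOCK)
				break;	/* nothing queued, do not sleep */
			sk_wait_data(sk, &timeo);
			continue;
		}
		tss.len -= ret;
		spliced += ret;

		/*
		 * Dropping and retaking the lock runs the backlog:
		 * packets received in bh context while we were
		 * splicing land on sk_receive_queue, so the next
		 * iteration can grab more skbs at once.
		 */
		release_sock(sk);
		lock_sock(sk);
	}
	release_sock(sk);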

> >
> >> 2) If we loop in tcp_read_sock(), calling skb_splice_bits() several times,
> >> shouldn't we perform the following tests inside this loop?
> >>
> >> if (sk->sk_err || sk->sk_state == TCP_CLOSE ||
> >>     (sk->sk_shutdown & RCV_SHUTDOWN) ||
> >>     signal_pending(current))
> >>         break;
> >>
> >> And remove them from tcp_splice_read() ?
> >
> > It could be done, but for what reason? To detect a disconnected socket early?
> > Is it worth the changes?
>
> I was thinking about the case where your thread is doing a splice() from a tcp
> socket to a pipe, while another thread is doing a splice() from this pipe to
> something else.
>
> Once patched, tcp_read_sock() could loop a long time...

Well, it may be a good idea... I can't say anything against it :)
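
For the record, a minimal userspace sketch of that two-thread scenario
(entirely hypothetical: the port, the output file name and the helper
names are mine, and error handling is trimmed):

#define _GNU_SOURCE		/* for splice() */
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

static int pipefd[2], sock_fd, out_fd;

/* Thread A: tcp socket -> pipe.  With the patch, one splice() call
 * may loop inside tcp_read_sock() for as long as data keeps arriving
 * and the pipe has room. */
static void *sock_to_pipe(void *unused)
{
	while (splice(sock_fd, NULL, pipefd[1], NULL, 65536,
		      SPLICE_F_MOVE | SPLICE_F_MORE) > 0)
		;
	close(pipefd[1]);	/* let the consumer see EOF */
	return NULL;
}

/* Thread B: pipe -> file, draining the pipe so thread A never stalls. */
static void *pipe_to_file(void *unused)
{
	while (splice(pipefd[0], NULL, out_fd, NULL, 65536,
		      SPLICE_F_MOVE) > 0)
		;
	return NULL;
}

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family      = AF_INET,
		.sin_port        = htons(12345),  /* arbitrary test port */
		.sin_addr.s_addr = htonl(INADDR_ANY),
	};
	int lfd = socket(AF_INET, SOCK_STREAM, 0);
	pthread_t a, b;

	bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
	listen(lfd, 1);
	sock_fd = accept(lfd, NULL, NULL);
	out_fd = open("splice-out.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	pipe(pipefd);

	pthread_create(&a, NULL, sock_to_pipe, NULL);
	pthread_create(&b, NULL, pipe_to_file, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Compile with -pthread; thread B keeps draining the pipe, so the single
splice() in thread A can keep tcp_read_sock() looping for as long as
data arrives, which is exactly the case where the in-loop checks matter.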

--
Evgeniy Polyakov

