Subject: RE: [PATCH] tipc: ensure skb->lock is initialised


> -----Original Message-----
> From: Eric Dumazet <eric.dumazet@gmail.com>
> Sent: 9-Jul-19 09:46
> To: Jon Maloy <jon.maloy@ericsson.com>; Eric Dumazet
> <eric.dumazet@gmail.com>; Chris Packham
> <Chris.Packham@alliedtelesis.co.nz>; ying.xue@windriver.com;
> davem@davemloft.net
> Cc: netdev@vger.kernel.org; tipc-discussion@lists.sourceforge.net; linux-
> kernel@vger.kernel.org
> Subject: Re: [PATCH] tipc: ensure skb->lock is initialised
>
>
>
> On 7/9/19 3:25 PM, Jon Maloy wrote:

[...]

> > TIPC is using the list lock at message reception within the scope of
> > tipc_sk_rcv()/tipc_skb_peek_port(), so it is fundamental that the lock
> > always is correctly initialized.
>
> Where is the lock acquired, why was it only acquired by queue purge and not
> normal dequeues ???

It is acquired twice (see the sketch below):
- First, in tipc_sk_rcv()->tipc_skb_peek_port(), to peek into one or more queue members and read their destination port number.
- Second, in tipc_sk_rcv()->tipc_sk_enqueue()->tipc_skb_dequeue(), to unlink a list member from the queue.
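
Roughly like this (a simplified sketch of the pattern in net/tipc/msg.h, not the literal source; the real peek walks the queue with a port filter, which is omitted here):

static u32 tipc_skb_peek_port(struct sk_buff_head *list, u32 filter)
{
	struct sk_buff *skb;
	u32 dport = 0;

	spin_lock_bh(&list->lock);		/* first acquisition */
	skb = skb_peek(list);
	if (skb)
		dport = msg_destport(buf_msg(skb));
	spin_unlock_bh(&list->lock);
	return dport;
}

static struct sk_buff *tipc_skb_dequeue(struct sk_buff_head *list, u32 dport)
{
	struct sk_buff *skb;

	spin_lock_bh(&list->lock);		/* second acquisition */
	skb = skb_peek(list);
	if (skb && msg_destport(buf_msg(skb)) == dport)
		__skb_unlink(skb, list);
	else
		skb = NULL;
	spin_unlock_bh(&list->lock);
	return skb;
}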

> >>
> > [...]
> >>
> >> tipc_link_xmit() for example never acquires the spinlock, yet uses
> >> skb_peek() and __skb_dequeue()
> >
> >
> > You should look at tipc_node_xmit instead. Node local messages are
> > sent directly to tipc_sk_rcv(), and never go through tipc_link_xmit()
>
> tipc_node_xmit() calls tipc_link_xmit() eventually, right ?

No.
tipc_link_xmit() is called only for messages with a non-local destination. Otherwise, tipc_node_xmit() sends node-local messages directly to the destination socket via tipc_sk_rcv().
The argument 'xmitq' becomes 'inputq' in tipc_sk_rcv() and 'list' in tipc_skb_peek_port(), since those functions don't distinguish between local and node-external incoming messages.
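
Schematically (a simplified outline, not the literal source; node lookup, link selection, locking and congestion handling are all omitted):

int tipc_node_xmit(struct net *net, struct sk_buff_head *list,
		   u32 dnode, int selector)
{
	if (in_own_node(net, dnode)) {
		/* Node-local destination: hand the queue straight to the
		 * socket layer; 'list' is what tipc_sk_rcv() sees as 'inputq'.
		 */
		tipc_sk_rcv(net, list);
		return 0;
	}

	/* Non-local destination: select the link towards dnode and
	 * eventually call tipc_link_xmit() on it (elided here).
	 */
	return 0;
}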

>
> Please show me where the head->lock is acquired, and why it needed.

The argument 'inputq' to tipc_sk_rcv() may come from two sources:
1) As an aggregated member of each tipc_node::tipc_link_entry. This queue is needed to guarantee sequential delivery of messages from the node/link layer to the socket layer. In this case, there may be buffers for multiple destination sockets in the same queue, and we may have multiple concurrent tipc_sk_rcv() jobs working on that queue. So the lock is needed for adding (in link.c::tipc_data_input()), peeking, and removing buffers.

2) The case you have been looking at, where it is created as 'xmitq' on the stack by a local socket. Here the lock is not strictly needed, as you have observed. But to reduce code duplication we have chosen to let tipc_sk_rcv() handle both types of queues uniformly, i.e., as if they all contain buffers with potentially multiple destination sockets, worked on by multiple concurrent calls. This requires that the lock be initialized even for this type of queue (see the initialiser comparison below). We have seen no measurable performance difference between this generic reception algorithm and a tailor-made variant for local messages only, while the amount of code saved is significant.
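
For reference, the only difference between the two sk_buff_head initialisers is this lock, so an on-stack queue that will be handed to tipc_sk_rcv() has to use the locking variant (paraphrased from include/linux/skbuff.h):

	struct sk_buff_head xmitq;

	__skb_queue_head_init(&xmitq);	/* prev/next/qlen only; ->lock untouched	  */
	skb_queue_head_init(&xmitq);	/* same, plus spin_lock_init(&xmitq.lock) */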

>
> If this is mandatory, then more fixes are needed than just initializing the lock
> for lockdep purposes.

It is not only for lockdep purposes; it is essential. But please provide details on where you see that more fixes are needed.

BR
///jon

