Subject: Re: [PATCH net-next v1 1/6] lan743x: boost performance on cpu archs w/o dma cache snooping
> diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
> index f1f6eba4ace4..f485320e5784 100644
> --- a/drivers/net/ethernet/microchip/lan743x_main.c
> +++ b/drivers/net/ethernet/microchip/lan743x_main.c
> @@ -1957,11 +1957,11 @@ static int lan743x_rx_next_index(struct lan743x_rx *rx, int index)
>
>  static struct sk_buff *lan743x_rx_allocate_skb(struct lan743x_rx *rx)
>  {
> -	int length = 0;
> +	struct net_device *netdev = rx->adapter->netdev;
>
> -	length = (LAN743X_MAX_FRAME_SIZE + ETH_HLEN + 4 + RX_HEAD_PADDING);
> -	return __netdev_alloc_skb(rx->adapter->netdev,
> -				  length, GFP_ATOMIC | GFP_DMA);
> +	return __netdev_alloc_skb(netdev,
> +				  netdev->mtu + ETH_HLEN + 4 + RX_HEAD_PADDING,
> +				  GFP_ATOMIC | GFP_DMA);
>  }
>
>  static void lan743x_rx_update_tail(struct lan743x_rx *rx, int index)
> @@ -1977,9 +1977,10 @@ static int lan743x_rx_init_ring_element(struct lan743x_rx *rx, int index,
>  {
>  	struct lan743x_rx_buffer_info *buffer_info;
>  	struct lan743x_rx_descriptor *descriptor;
> -	int length = 0;
> +	struct net_device *netdev = rx->adapter->netdev;
> +	int length;

Please keep to reverse Christmas tree.
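That is, order the local variable declarations longest line first. With
the declarations from the hunk above, that would look like:

	struct net_device *netdev = rx->adapter->netdev;
	struct lan743x_rx_buffer_info *buffer_info;
	struct lan743x_rx_descriptor *descriptor;
	int length;
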
>
> -	length = (LAN743X_MAX_FRAME_SIZE + ETH_HLEN + 4 + RX_HEAD_PADDING);
> +	length = netdev->mtu + ETH_HLEN + 4 + RX_HEAD_PADDING;
>  	descriptor = &rx->ring_cpu_ptr[index];
>  	buffer_info = &rx->buffer_info[index];
>  	buffer_info->skb = skb;
> @@ -2148,11 +2149,18 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
>  		descriptor = &rx->ring_cpu_ptr[first_index];
>
>  		/* unmap from dma */
> +		packet_length = RX_DESC_DATA0_FRAME_LENGTH_GET_
> +				(descriptor->data0);
>  		if (buffer_info->dma_ptr) {
> -			dma_unmap_single(&rx->adapter->pdev->dev,
> -					 buffer_info->dma_ptr,
> -					 buffer_info->buffer_length,
> -					 DMA_FROM_DEVICE);
> +			dma_sync_single_for_cpu(&rx->adapter->pdev->dev,
> +						buffer_info->dma_ptr,
> +						packet_length,
> +						DMA_FROM_DEVICE);
> +			dma_unmap_single_attrs(&rx->adapter->pdev->dev,
> +					       buffer_info->dma_ptr,
> +					       buffer_info->buffer_length,
> +					       DMA_FROM_DEVICE,
> +					       DMA_ATTR_SKIP_CPU_SYNC);

So this patch appears to contain two different changes:
1) You only allocate a receive buffer as big as the MTU plus overheads.
2) You change the cache operations to operate on the received length.

The first change should be completely safe and, I guess, gives most of
the benefit. The second one is where interesting things might happen.
So please split this patch into two. If it does break, we can git
bisect and will probably end up on the second patch.
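To illustrate what makes the second change interesting:
DMA_ATTR_SKIP_CPU_SYNC moves all cache maintenance into the driver's
hands, so every CPU access to the buffer has to be bracketed by
explicit syncs. A minimal sketch of that pattern (illustrative only,
not taken from your series; dev, dma_ptr, buffer_length and
packet_length are placeholder names):

	/* map without an implicit cache sync */
	dma_ptr = dma_map_single_attrs(dev, skb->data, buffer_length,
				       DMA_FROM_DEVICE,
				       DMA_ATTR_SKIP_CPU_SYNC);

	/* ... hardware writes packet_length bytes into the buffer ... */

	/* make only the received bytes visible to the CPU */
	dma_sync_single_for_cpu(dev, dma_ptr, packet_length,
				DMA_FROM_DEVICE);

	/* unmap without touching the cache a second time */
	dma_unmap_single_attrs(dev, dma_ptr, buffer_length,
			       DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);

If a cache line of the buffer gets dirtied between map and unmap, or a
reused buffer goes back to the hardware without a
dma_sync_single_for_device(), that is exactly the kind of breakage a
bisect would land on.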

Thanks
Andrew
