Subject: Re: [PATCH v4] lpc32xx: Added ethernet driver
Hi Ben,

thank you for your review!

On 03/05/2012 11:45 PM, Ben Hutchings wrote:
> [...]
>> +static int lpc_eth_poll(struct napi_struct *napi, int budget)
>> +{
>> +	struct netdata_local *pldat = container_of(napi,
>> +			struct netdata_local, napi);
>> +	struct net_device *ndev = pldat->ndev;
>> +	unsigned long flags;
>> +	int rx_done = 0;
>> +
>> +	spin_lock_irqsave(&pldat->lock, flags);
>> +
>> +	__lpc_handle_xmit(ndev);
>> +	rx_done = __lpc_handle_recv(ndev, budget);
>> +
>> +	if (rx_done < budget) {
>> +		napi_complete(napi);
>> +		lpc_eth_enable_int(pldat->net_base);
>> +	}
>> +
>> +	spin_unlock_irqrestore(&pldat->lock, flags);
>
> This is really sad. You implement NAPI but then take away most of the
> benefits of that by disabling interrupts.
>
> It looks like you could safely unlock pldat->lock before calling
> __lpc_handle_recv - nothing else manipulates RX queue state so no lock
> is required.
>
> As for the TX side, you can probably use the TX queue lock
> (__netif_tx_lock, __netif_tx_unlock) to serialise with
> lpc_eth_hard_start_xmit() and avoid taking pldat->lock in either
> __lpc_handle_xmit() or here.

Sounds reasonable; I will do it.
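
A rough, untested sketch of what I understand you are suggesting
(assuming the RX ring state is only touched from the poll routine, and
that the TX queue lock is enough to serialise __lpc_handle_xmit()
against lpc_eth_hard_start_xmit()):

static int lpc_eth_poll(struct napi_struct *napi, int budget)
{
	struct netdata_local *pldat = container_of(napi,
			struct netdata_local, napi);
	struct net_device *ndev = pldat->ndev;
	struct netdev_queue *txq = netdev_get_tx_queue(ndev, 0);
	int rx_done = 0;

	/* Serialise TX completion handling against
	 * lpc_eth_hard_start_xmit() with the TX queue lock instead
	 * of pldat->lock. */
	__netif_tx_lock(txq, smp_processor_id());
	__lpc_handle_xmit(ndev);
	__netif_tx_unlock(txq);

	/* Only the poll routine manipulates RX ring state, so no
	 * lock is needed here. */
	rx_done = __lpc_handle_recv(ndev, budget);

	if (rx_done < budget) {
		napi_complete(napi);
		lpc_eth_enable_int(pldat->net_base);
	}

	return rx_done;
}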

However, I implemented it following the example of
drivers/net/ethernet/via/via-velocity.c:velocity_poll() - is there a
good reason for doing it that way in the velocity driver, or is it done
incorrectly there as well?

Thanks,

Roland

