Subject: Re: [PATCH V2 net 10/20] net/ena: remove redundant logic in napi callback for busy poll mode


On 12/05/2016 07:45 AM, Eric Dumazet wrote:
> On Sun, 2016-12-04 at 15:19 +0200, Netanel Belgazal wrote:
>> sk_busy_loop can call the napi callback a few million times a second.
>> Each call currently unmasks the interrupt.
>> We want to reduce the number of unmasks.
>>
>> Add an atomic variable that will tell the napi handler if
>> it was called from irq context or not.
>> Unmask the interrupt only from irq context.
>>
>> A scenario where the driver is left with a missed unmask isn't feasible.
>> When ena_intr_msix_io is called, there are two possibilities:
>> 1) It fires before napi completes and calls napi_complete_done
>> 2) It fires after napi_complete_done has been called
>>
>> In the former case, napi will unmask the interrupt as needed.
>> In the latter case, napi_complete_done has already removed napi from the
>> schedule list, so napi will be rescheduled (by ena_intr_msix_io) and the
>> interrupt will be unmasked as desired in the second napi call.
>>
>> Signed-off-by: Netanel Belgazal <netanel@annapurnalabs.com>
>> ---
>
> This looks very complicated to me.
>
> I guess you missed the recent patches that happened on net-next ?

You are correct.
I didn't see the patches.
It is much better to use the napi_complete_done() return value.
I'll rework my patch.
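
For reference, the pattern from those commits looks roughly like this (a
sketch only; ena_io_poll, ena_clean_io_irq and ena_unmask_interrupt are
placeholder names, not necessarily the exact functions in the driver):

#include <linux/netdevice.h>

/* Sketch of an io poll routine relying on the napi_complete_done()
 * return value. struct ena_napi and the helpers below stand in for the
 * driver's own types and cleanup/unmask functions.
 */
static int ena_io_poll(struct napi_struct *napi, int budget)
{
	struct ena_napi *ena_napi = container_of(napi, struct ena_napi, napi);
	int work_done;

	/* Process tx/rx completions, up to budget packets. */
	work_done = ena_clean_io_irq(ena_napi, budget);

	/* napi_complete_done() returns false while we are being busy
	 * polled, so the interrupt is only re-armed when napi really
	 * completed from the softirq path, with no extra atomic flag
	 * in the fast path.
	 */
	if (work_done < budget && napi_complete_done(napi, work_done))
		ena_unmask_interrupt(ena_napi);

	return work_done;
}

That keeps the unmask decision in the core napi/busy-poll code rather than
in the driver.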

>
> 2e713283751f494596655d9125c168aeb913f71d net/mlx4_en: use napi_complete_done() return value
> 364b6055738b4c752c30ccaaf25c624e69d76195 net: busy-poll: return busypolling status to drivers
> 21cb84c48ca0619181106f0f44f3802a989de024 net: busy-poll: remove need_resched() from sk_can_busy_loop()
> 217f6974368188fd8bd7804bf5a036aa5762c5e4 net: busy-poll: allow preemption in sk_busy_loop()
>
> napi_complete_done() return code can be used by a driver,
> no need to add yet another atomic operation in fast path.
>
> Anyway, this looks wrong :
>
> @@ -1186,6 +1201,7 @@ static irqreturn_t ena_intr_msix_io(int irq, void *data)
> {
> struct ena_napi *ena_napi = data;
>
> + atomic_set(&ena_napi->unmask_interrupt, 1);
> napi_schedule(&ena_napi->napi);
>
> You probably wanted :
>
> if (napi_schedule_prep(&ena_napi->napi)) {
> 	atomic_set(&ena_napi->unmask_interrupt, 1);
> 	__napi_schedule(&ena_napi->napi);
> }
>
>
>
> Please rework this napi poll using core infrastructure.
>
> busypoll logic should be centralized, not reimplemented in different ways in a driver.
>
> Thanks.
>
>
>
