Subject: Re: Remove __napi_schedule_irqoff?
On 18.10.2020 19:19, Jakub Kicinski wrote:
> On Sun, 18 Oct 2020 10:20:41 +0200 Heiner Kallweit wrote:
>>>> Otherwise a non-solution could be to make IRQ_FORCED_THREADING
>>>> configurable.
>>>
>>> I have to say I do not understand why we want to defer to a thread
>>> the hard IRQ that we use in the NAPI model.
>>>
>> Seems like the current forced threading comes with the big hammer and
>> thread-ifies all hard IRQs. To avoid this, all NAPI network drivers
>> would have to request the interrupt with IRQF_NO_THREAD.
>
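As an illustration of what that would mean per driver (a minimal sketch,
all names here are made up; the only real point is passing IRQF_NO_THREAD
to request_irq() so the handler stays a hard IRQ even under the
threadirqs boot parameter):

	#include <linux/interrupt.h>
	#include <linux/netdevice.h>

	/* Hypothetical driver private data, for illustration only. */
	struct foo_priv {
		struct napi_struct napi;
	};

	static irqreturn_t foo_isr(int irq, void *dev_id)
	{
		struct foo_priv *priv = dev_id;

		/* Hand the RX/TX work to the NAPI poll loop. */
		napi_schedule(&priv->napi);
		return IRQ_HANDLED;
	}

	static int foo_request_irq(int irq, struct foo_priv *priv)
	{
		/* IRQF_NO_THREAD exempts this handler from forced
		 * threading, so it keeps running in hard-IRQ context
		 * even when booted with threadirqs.
		 */
		return request_irq(irq, foo_isr, IRQF_NO_THREAD,
				   "foo-eth", priv);
	}
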
> Right, it'd work for some drivers. Other drivers try to take spin locks
> in their IRQ handlers.
>
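For reference, the kind of handler that rules this out looks roughly
like the sketch below (hypothetical names): it takes a spinlock_t,
which on PREEMPT_RT is a sleeping lock, so such a handler must stay
threaded and cannot simply be flagged IRQF_NO_THREAD.

	#include <linux/interrupt.h>
	#include <linux/spinlock.h>
	#include <linux/types.h>

	/* bar_priv / bar_isr are made-up names for illustration. */
	struct bar_priv {
		spinlock_t lock;	/* a sleeping lock on PREEMPT_RT */
		u32 shadow_status;
	};

	static irqreturn_t bar_isr(int irq, void *dev_id)
	{
		struct bar_priv *priv = dev_id;

		/* Fine in a threaded handler; on PREEMPT_RT this may
		 * sleep, so it must not run in hard-IRQ context.
		 */
		spin_lock(&priv->lock);
		priv->shadow_status++;	/* stand-in for real register handling */
		spin_unlock(&priv->lock);

		return IRQ_HANDLED;
	}
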
> What gave me pause was that we have a busy loop in napi_schedule_prep:
>
> bool napi_schedule_prep(struct napi_struct *n)
> {
> 	unsigned long val, new;
>
> 	do {
> 		val = READ_ONCE(n->state);
> 		if (unlikely(val & NAPIF_STATE_DISABLE))
> 			return false;
> 		new = val | NAPIF_STATE_SCHED;
>
> 		/* Sets STATE_MISSED bit if STATE_SCHED was already set
> 		 * This was suggested by Alexander Duyck, as compiler
> 		 * emits better code than :
> 		 * if (val & NAPIF_STATE_SCHED)
> 		 *	new |= NAPIF_STATE_MISSED;
> 		 */
> 		new |= (val & NAPIF_STATE_SCHED) / NAPIF_STATE_SCHED *
> 						   NAPIF_STATE_MISSED;
> 	} while (cmpxchg(&n->state, val, new) != val);
>
> 	return !(val & NAPIF_STATE_SCHED);
> }
>
> Dunno how acceptable this is to run in an IRQ handler on RT...
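As an aside, the arithmetic in the quoted comment works because
NAPIF_STATE_SCHED is a single bit: masking and dividing by it yields
0 or 1, and multiplying by NAPIF_STATE_MISSED scales that to either 0
or the MISSED bit, with no branch. A standalone userspace sketch (the
bit positions below are assumptions that only mirror the kernel's
single-bit flags):

	#include <assert.h>
	#include <stdio.h>

	#define NAPIF_STATE_SCHED	(1UL << 0)	/* single bit, as in the kernel */
	#define NAPIF_STATE_MISSED	(1UL << 1)

	int main(void)
	{
		for (unsigned long val = 0; val < 4; val++) {
			/* branchless: 0 or 1 after the division,
			 * then scaled up to the MISSED bit
			 */
			unsigned long branchless =
				(val & NAPIF_STATE_SCHED) /
				NAPIF_STATE_SCHED * NAPIF_STATE_MISSED;
			unsigned long branchy =
				(val & NAPIF_STATE_SCHED) ?
				NAPIF_STATE_MISSED : 0;

			assert(branchless == branchy);
			printf("val=%lu -> missed bit %lu\n",
			       val, branchless);
		}
		return 0;
	}
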
>
If I understand this code right, it's not a loop that actually waits
for something. It's a cmpxchg retry loop: it just retries if the value
of n->state changed between the read and the cmpxchg. So I don't think
we'll ever see the loop being executed more than twice.
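A userspace analogue of the same retry pattern, built on C11 atomics
(the STATE_* values and function name are invented for the sketch):

	#include <stdatomic.h>
	#include <stdbool.h>

	#define STATE_SCHED	(1UL << 0)
	#define STATE_DISABLE	(1UL << 1)

	static _Atomic unsigned long state;

	/* Retry-on-contention loop in the style of napi_schedule_prep():
	 * each iteration recomputes 'new' from a fresh snapshot, and the
	 * compare-exchange only fails (so the loop only repeats) if some
	 * other thread modified 'state' since the load.
	 */
	static bool try_set_sched(void)
	{
		unsigned long val, new;

		do {
			val = atomic_load(&state);
			if (val & STATE_DISABLE)
				return false;
			new = val | STATE_SCHED;
		} while (!atomic_compare_exchange_weak(&state, &val, new));

		return !(val & STATE_SCHED);
	}

The compare-exchange fails only when another thread changed the value
after the load, so each retry makes progress against a fresh snapshot
rather than spinning on a condition.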
