Subject: Re: [RFC 1/2] softirq: Defer net rx/tx processing to ksoftirqd context
On Thu, Jan 11, 2018 at 10:48 AM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:
> On Thu, Jan 11, 2018 at 8:32 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>> On Thu, Jan 11, 2018 at 08:20:18AM -0800, Eric Dumazet wrote:
>>> diff --git a/kernel/softirq.c b/kernel/softirq.c
>>> index 2f5e87f1bae22f3df44fa4493fcc8b255882267f..d2f20daf77d14dc8ebde00d7c4a0237152d082ba 100644
>>> --- a/kernel/softirq.c
>>> +++ b/kernel/softirq.c
>>> @@ -192,7 +192,7 @@ EXPORT_SYMBOL(__local_bh_enable_ip);
>>>
>>> /*
>>> * We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
>>> - * but break the loop if need_resched() is set or after 2 ms.
>>> + * but break the loop after 2 ms.
>>> * The MAX_SOFTIRQ_TIME provides a nice upper bound in most cases, but in
>>> * certain cases, such as stop_machine(), jiffies may cease to
>>> * increment and so we need the MAX_SOFTIRQ_RESTART limit as
>>> @@ -299,8 +299,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
>>>
>>> pending = local_softirq_pending();
>>> if (pending) {
>>> - if (time_before(jiffies, end) && !need_resched() &&
>>> - --max_restart)
>>> + if (time_before(jiffies, end) && --max_restart)
>>> goto restart;
>>>
>>> wakeup_softirqd();
>>
>> You just introduced a 2ms preempt-disable region I think, that's not
>> cool for PREEMPT and a plain bug on PREEMPT_RT.
>
> I actually think that the whole "time_before()" check is garbage and should be removed.
>
> If doing 10 rounds of softirq processing takes several jiffies, you
> have bigger issues.
>
> Looking at the history, the original time_before() was _replacing_ the
> "max_restart" limit. See commit c10d73671ad3 ("softirq: reduce latencies").
>
> But then "max_restart" was put back in because jiffies just isn't
> reliable in the first place.
>
> And I think *this* is the real issue. Networking does a *shitton* of
> things in softirq context, and the problem is not the softirq code,
> it's the networking code. The softirq code doesn't understand that the
> networking code does billions of operations, and one softirq can take
> ages and ages because it is handling millions of packets.
>
> My feeling is that it's the networking code that should notice "I have
> a billion packets in my softirq queue, and I just keep getting more,
> now *I* should start doing things in a thread instead".
>
> Because none of those limits really make sense for any of the other
> softirq users. Everybody else just wants a low-latency "soft
> interrupt", not some stupid thread. Who knew? Maybe the name should
> have given the thing away?
>
> If you want a thread, use a tasklet and add_task_work().
>
> Maybe networking should reconsider its use of softirqs entirely?
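
For anyone following along without the tree handy, here is a rough,
self-contained userspace sketch of the exit logic the patch above
changes (jiffies and need_resched() are simulated stubs here; the real
__do_softirq() in kernel/softirq.c differs in detail):

#include <stdbool.h>
#include <stdio.h>

#define MAX_SOFTIRQ_RESTART 10	/* upper bound on loop restarts */
#define MAX_SOFTIRQ_TIME_MS 2	/* the 2 ms budget from the comment */

static unsigned long jiffies_ms;	/* fake clock, 1 tick = 1 ms */

static bool need_resched(void)		/* stub: nobody else wants the CPU */
{
	return false;
}

/* Pretend each pass burns ~1 ms and always leaves more work pending. */
static bool handle_pending_softirqs(void)
{
	jiffies_ms += 1;
	return true;
}

static void do_softirq_sketch(void)
{
	unsigned long end = jiffies_ms + MAX_SOFTIRQ_TIME_MS;
	int max_restart = MAX_SOFTIRQ_RESTART;
	bool pending;

restart:
	pending = handle_pending_softirqs();
	if (pending) {
		/* Pre-patch condition: the time budget, need_resched() and
		 * the restart counter all gate the restart.  The patch drops
		 * the need_resched() test, which is what Peter objects to. */
		if (jiffies_ms < end && !need_resched() && --max_restart)
			goto restart;
		printf("budget exhausted, deferring to ksoftirqd\n");
	}
}

int main(void)
{
	do_softirq_sketch();
	return 0;
}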

The issue can happen when just one packet is delivered (and handled) per NIC IRQ.

Not 'billions' :/

Suppose a host receives 99,000 packets per second on one queue (one CPU
handling that particular IRQ), each packet taking 10 usec to handle
(various cache misses and all the overhead).

How are we supposed to know that this particular workload is problematic
for innocent user threads on the same CPU?

Is 'networking' supposed to perform some kind of mid-term analysis of
what happened in the past and/or predict the future?
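
Back-of-the-envelope, those numbers already eat essentially the whole
core: 99,000 packets/s * 10 usec/packet = 0.99 CPU-seconds of softirq
work per wall-clock second, leaving ~1% for user threads. A trivial
check (illustrative only; the figures are the hypothetical ones above):

#include <stdio.h>

int main(void)
{
	const double pkts_per_sec = 99000.0;	/* one RX queue, one CPU */
	const double usec_per_pkt = 10.0;	/* per-packet handling cost */
	double cpu_share = pkts_per_sec * usec_per_pkt / 1e6;

	/* prints: softirq share of that CPU: 99% */
	printf("softirq share of that CPU: %.0f%%\n", cpu_share * 100.0);
	return 0;
}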
