Subject: Re: [PATCH 4/5] netdev: implement infrastructure for threadable napi irq

On Thu, 2016-06-16 at 04:19 -0700, Eric Dumazet wrote:
> On Thu, Jun 16, 2016 at 3:39 AM, Paolo Abeni <pabeni@redhat.com> wrote:
> > We used a different setup to explicitly avoid the (guest) userspace
> > starvation issue. Using a guest with 2 vCPUs (or more) and a single
> > queue avoids the starvation issue, because the scheduler moves the
> > user space processes to a different vCPU than the ksoftirqd thread.
> >
> > In the hypervisor, with a vanilla kernel, the qemu process receives a
> > fair share of the cpu time, but considerably less than 100%, and its
> > performance is bounded to a throughput considerably lower than the
> > theoretical one.
> >
>
> Completely different setup than last time. I am kind of lost.
>
> Are you trying to find the optimal way to demonstrate your patch can be useful?
>
> In a case with 2 vcpus, the _standard_ kernel will migrate the
> user thread to the cpu not used by the IRQ, once the process
> scheduler sees two threads competing for one cpu (ksoftirqd and
> the user thread) while the other cpu is idle.
>
> Trying to shift the IRQ 'thread' is not nice, since the hardware IRQ
> will be delivered on the wrong cpu.
>
> Unless user space forces cpu pinning? Then tell the user it should not.
>
> The natural choice is to put both producer and consumer on the same
> cpu for cache locality reasons (wake affine), but under stress allow
> the consumer to run on another cpu if one is available.
>
> If the process scheduler fails to migrate the producer, then there
> is a bug that needs to be fixed.

I guess you mean 'consumer' here. The scheduler doesn't fail to migrate
it: the consumer is actually migrated many times, but on each cpu it
lands on it finds a competing, already-running ksoftirqd thread.
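
For what it's worth, a quick way to watch this from user space is a
busy-loop program that reports its own migrations via sched_getcpu().
This is just an illustrative sketch, not code from the patch; the
expectation, given the behavior above, is that under the flood it keeps
bouncing between cpus, each of which already hosts a busy ksoftirqd:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	int cpu, last = -1;

	/* busy loop standing in for the cpu-bound consumer */
	for (;;) {
		cpu = sched_getcpu();
		if (cpu != last) {
			printf("now on cpu %d\n", cpu);
			last = cpu;
		}
	}
}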

The general problem is that under significant network load (not
necessarily a UDP flood; similar behavior is observed even with TCP_RR
tests), with enough rx queues available and enough flows running, no
single thread/process can use 100% of any cpu, even if the overall
capacity would allow it.
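
And for reference, the explicit cpu pinning mentioned above is just a
sched_setaffinity() call from user space. The following is only a
sketch of what an application that defeats the migration logic looks
like, not something taken from our tests:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);	/* restrict this process to cpu 0 */
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");

	/* the receive loop would run here, pinned to cpu 0 and
	 * sharing it with any ksoftirqd activity on that cpu */
	return 0;
}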

Paolo
