From: William Tu <u9012063@gmail.com>
Date: Wed, 3 Jul 2019
Subject: Re: [PATCH bpf v2] xdp: fix race on generic receive path
On Wed, Jul 3, 2019 at 6:20 AM Magnus Karlsson
<magnus.karlsson@gmail.com> wrote:
>
> On Wed, Jul 3, 2019 at 2:09 PM Ilya Maximets <i.maximets@samsung.com> wrote:
> >
> > Unlike driver mode, generic XDP receive can be triggered
> > by different threads on different CPU cores at the same time,
> > corrupting the fill and rx queues. For example, this can
> > happen when two processes send packets to the first
> > interface of a veth pair while the other end is opened
> > with an AF_XDP socket.
> >
> > Take a lock for each generic receive to avoid the race.
>
> I measured the performance impact of this on rxdrop on my local
> machine: throughput went from 2.19 to 2.08 Mpps, so roughly a 5%
> drop. I think we can live with this in XDP_SKB mode. If we at some
> later point in time need to boost performance in this mode, let us
> look at it then from a broader perspective and find the
> lowest-hanging fruit.
>
> Thanks Ilya for this fix.
>
> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
>
> > Fixes: c497176cb2e4 ("xsk: add Rx receive functions and poll support")
> > Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
> > ---

Tested on my machine and it works OK.
Tested-by: William Tu <u9012063@gmail.com>
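
For context, here is a minimal sketch of the serialization the patch
describes: a per-socket spinlock taken around the generic (XDP_SKB)
receive path. The rx_lock field name and the xsk_generic_rcv(),
xsk_rcv() and xsk_flush() helpers are assumptions for illustration,
not necessarily the exact contents of the patch.

/* include/net/xdp_sock.h (sketch) */
struct xdp_sock {
	/* ... existing fields ... */
	spinlock_t rx_lock;	/* serializes generic receive;
				 * spin_lock_init() at socket creation */
};

/* net/xdp/xsk.c (sketch) */
int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
{
	int err;

	/* Generic receive can run concurrently on several CPUs for
	 * the same socket (e.g. two senders on one veth peer), so
	 * updates to the fill and rx rings must be serialized.
	 * The _bh variant keeps softirq receive on this CPU from
	 * deadlocking against us.
	 */
	spin_lock_bh(&xs->rx_lock);
	err = xsk_rcv(xs, xdp);		/* copy frame, produce rx descriptor */
	xsk_flush(xs);			/* publish descriptor, wake readers */
	spin_unlock_bh(&xs->rx_lock);

	return err;
}

The driver (XDP_DRV) path needs no such lock because each rx queue is
processed by a single NAPI context at a time.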
