Date: Wed, 28 Feb 2018
From: Michael S. Tsirkin
Subject: Re: [RFC PATCH v2] ptr_ring: linked list fallback
On Wed, Feb 28, 2018 at 10:20:33PM +0800, Jason Wang wrote:
>
>
> On 2018-02-28 22:01, Michael S. Tsirkin wrote:
> > On Wed, Feb 28, 2018 at 02:28:21PM +0800, Jason Wang wrote:
> > >
> > > On 2018-02-28 12:09, Michael S. Tsirkin wrote:
> > > > > > Or we can add plist to a union:
> > > > > >
> > > > > > struct sk_buff {
> > > > > > 	union {
> > > > > > 		struct {
> > > > > > 			/* These two members must be first. */
> > > > > > 			struct sk_buff *next;
> > > > > > 			struct sk_buff *prev;
> > > > > > 			union {
> > > > > > 				struct net_device *dev;
> > > > > > 				/* Some protocols might use this space to store information,
> > > > > > 				 * while device pointer would be NULL.
> > > > > > 				 * UDP receive path is one user.
> > > > > > 				 */
> > > > > > 				unsigned long dev_scratch;
> > > > > > 			};
> > > > > > 		};
> > > > > > 		struct rb_node rbnode;	/* used in netem & tcp stack */
> > > > > > +		struct plist plist;	/* For use with ptr_ring */
> > > > > > 	};
> > > > > >
> > > > > This looks OK.
> > > > >
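Something along these lines should be enough for the union trick above -
just a sketch, the name and layout here are made up, not necessarily what
the patch ends up using:

	struct plist {
		struct plist *next;	/* singly linked; FIFO overflow list */
	};

A single next pointer keeps the union member no larger than the
next/prev pair it shares space with.
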
> > > > > > > For XDP, we need to embed plist in struct xdp_buff too,
> > > > > > Right - that's pretty straightforward, isn't it?
> > > > > Yes, but it's not clear to me that this is really needed for XDP,
> > > > > considering the lock contention it brings.
> > > > >
> > > > > Thanks
> > > > The contention only arises when the ring overflows into the list, though.
> > > >
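To make that concrete, the producer path would look roughly like the
sketch below. Only __ptr_ring_produce() is the existing ptr_ring helper;
the list_lock/list_head/list_tail fields and the function name are made
up for illustration:

	/* Try the ring first; only on failure (ring full) take the
	 * list lock and queue onto the overflow list.  That list lock
	 * is shared with the consumer, which is where the contention
	 * being discussed here can show up.
	 */
	static inline int ptr_ring_produce_any(struct ptr_ring *r,
					       struct plist *node)
	{
		int ret;

		spin_lock(&r->producer_lock);
		ret = __ptr_ring_produce(r, node);
		spin_unlock(&r->producer_lock);
		if (likely(!ret))
			return 0;

		/* Ring full: overflow into the list instead of dropping. */
		spin_lock(&r->list_lock);
		node->next = NULL;
		if (r->list_tail)
			r->list_tail->next = node;
		else
			r->list_head = node;
		r->list_tail = node;
		spin_unlock(&r->list_lock);
		return 0;
	}

Note the sketch ignores ordering: a real implementation would also have
to keep producing into the list while it is non-empty, otherwise new
ring entries could overtake the queued ones.
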
> > > Right, but there's usually a mismatch of speed between producer and
> > > consumer. In the case of a fast producer, we may hit this contention
> > > very frequently.
> > >
> > > Thanks
> > This is not true in my experiments: there, a ring size of 4k is enough
> > that packet drops occur in only a single-digit percentage of cases.
> >
> > Do you have workloads where rings are full most of the time?
>
> E.g. using xdp_redirect to redirect packets from ixgbe to tap. In my test,
> ixgbe can produce ~8 Mpps, but vhost can only consume ~3.5 Mpps.

Then you are better off just using a small ring and dropping
packets early, right?

> >
> > One other nice side effect of this patch is that, instead of dropping
> > packets quickly, it slows the producer down to match the consumer's
> > speed (see the consumer-side sketch at the end of this message).
>
> In some cases, the producer may not want to be slowed down, e.g. in devmap,
> which can redirect packets to several different interfaces.
> > IOW, it can go either way in theory; we will need to test and see the effect.
> >
>
> Yes.
>
> Thanks
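
For completeness, the matching consumer path of such a fallback might
look roughly like this, continuing the sketch earlier in this message
(the list_* names are again illustrative):

	/* Drain the ring first; the list lock is taken only when the
	 * ring runs dry, so the common consume path never touches it.
	 */
	static inline void *ptr_ring_consume_any(struct ptr_ring *r)
	{
		struct plist *node;
		void *ptr;

		spin_lock(&r->consumer_lock);
		ptr = __ptr_ring_consume(r);
		spin_unlock(&r->consumer_lock);
		if (likely(ptr))
			return ptr;

		/* Ring empty: fall back to the overflow list. */
		spin_lock(&r->list_lock);
		node = r->list_head;
		if (node) {
			r->list_head = node->next;
			if (!r->list_head)
				r->list_tail = NULL;
		}
		spin_unlock(&r->list_lock);
		return node;
	}

This is the backpressure point quoted above: instead of dropping on a
full ring, entries queue up and the producer ends up paced by the
consumer through the shared list lock.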
