Subject: Re: [PATCH] virtio-ring: Use threshold for switching to indirect descriptors
On Thu, Dec 01, 2011 at 10:09:37AM +0200, Sasha Levin wrote:
> On Thu, 2011-12-01 at 09:58 +0200, Michael S. Tsirkin wrote:
> > On Thu, Dec 01, 2011 at 01:12:25PM +1030, Rusty Russell wrote:
> > > On Wed, 30 Nov 2011 18:11:51 +0200, Sasha Levin <levinsasha928@gmail.com> wrote:
> > > > On Tue, 2011-11-29 at 16:58 +0200, Avi Kivity wrote:
> > > > > On 11/29/2011 04:54 PM, Michael S. Tsirkin wrote:
> > > > > > >
> > > > > > > Which is actually strange; weren't indirect buffers introduced to make
> > > > > > > the performance *better*? From what I see it's pretty much the
> > > > > > > same/worse for virtio-blk.
> > > > > >
> > > > > > I know they were introduced to allow adding very large bufs.
> > > > > > See 9fa29b9df32ba4db055f3977933cd0c1b8fe67cd
> > > > > > Mark, you wrote the patch, could you tell us which workloads
> > > > > > benefit the most from indirect bufs?
> > > > > >
> > > > >
> > > > > Indirects are really for block devices with many spindles, since there
> > > > > the limiting factor is the number of requests in flight. Network
> > > > > interfaces are limited by bandwidth; it's better to increase the ring
> > > > > size and use direct buffers there (so the ring size more or less
> > > > > corresponds to the buffer size).
> > > > >
> > > >
> > > > I did some testing of indirect descriptors under different workloads.
> > >
> > > MST and I discussed getting clever with dynamic limits ages ago, but it
> > > was down low on the TODO list. Thanks for diving into this...
> > >
> > > AFAICT, if the ring never fills, direct is optimal. When the ring
> > > fills, indirect is optimal (we're better to queue now than later).
> > >
> > > Why not something simple, like a threshold which drops every time we
> > > fill the ring?
> > >
> > > struct vring_virtqueue
> > > {
> > >         ...
> > >         int indirect_thresh;
> > >         ...
> > > }
> > >
> > > virtqueue_add_buf_gfp()
> > > {
> > >         ...
> > >
> > >         if (vq->indirect &&
> > >             (vq->vring.num - vq->num_free) + out + in > vq->indirect_thresh)
> > >                 return indirect();
> > >         ...
> > >
> > >         if (vq->num_free < out + in) {
> > >                 if (vq->indirect && vq->indirect_thresh > 0)
> > >                         vq->indirect_thresh--;
> > >
> > >                 ...
> > >         }
> > >
> > > Too dumb?
> > >
> > > Cheers,
> > > Rusty.
> >
> > We'll presumably need some logic to increment it back,
> > to account for random workload changes.
> > Something like slow start?
>
> We can increment it each time the queue was less than 10% full; it
> should act like slow start, no?

No, we really shouldn't get an empty ring as long as things behave
well. What I meant is something like:

#define VIRTIO_DECREMENT 2
#define VIRTIO_INCREMENT 1

if (vq->num_free < out + in) {
        if (vq->indirect && vq->indirect_thresh > VIRTIO_DECREMENT)
                vq->indirect_thresh /= VIRTIO_DECREMENT;
} else {
        if (vq->indirect_thresh < vq->vring.num)
                vq->indirect_thresh += VIRTIO_INCREMENT;
}

So we try to avoid indirect buffers, but the moment there's no space we
decrease the threshold drastically. If you make the increment and decrement
module parameters, it's easy to try different values.
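
For example, a rough sketch of how those knobs could be exposed from
virtio_ring.c (the parameter names here are made up for illustration,
not taken from any existing patch):

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Divisor applied to indirect_thresh when the ring fills up. */
static unsigned int indirect_thresh_div = 2;
module_param(indirect_thresh_div, uint, 0644);
MODULE_PARM_DESC(indirect_thresh_div,
                 "Divide indirect_thresh by this when the ring is full");

/* Step added back to indirect_thresh while the ring still has room. */
static unsigned int indirect_thresh_inc = 1;
module_param(indirect_thresh_inc, uint, 0644);
MODULE_PARM_DESC(indirect_thresh_inc,
                 "Add this to indirect_thresh while the ring has room");

The hunk above would then use indirect_thresh_div / indirect_thresh_inc
instead of the #defines, so retuning is a module reload rather than a
rebuild.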


> --
>
> Sasha.

