Subject: Re: [RFC][Patch v8 5/7] virtio: Enables to add a single descriptor to the host
On Wed, 6 Feb 2019 07:56:37 -0500
Nitesh Narayan Lal <nitesh@redhat.com> wrote:

> On 2/5/19 3:49 PM, Michael S. Tsirkin wrote:
> > On Mon, Feb 04, 2019 at 03:18:52PM -0500, Nitesh Narayan Lal wrote:
> >> This patch enables the caller to expose a single buffer to the
> >> other end using a vring descriptor. It also allows the caller to
> >> perform this action in a synchronous manner by using virtqueue_kick_sync.
> >>
> >> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
> > I am not sure why we need this API. Polling in the guest
> > until the host runs isn't great either, since both
> > might be running on the same host CPU.
> True.
>
> However, my understanding is that the existing APIs such as
> virtqueue_add_outbuf() require an allocation, which is problematic
> for my implementation.
> During normal kernel usage I am not blocking the allocation path:
> even if one of the zones is locked, the other zone can still be
> used to get free pages.
> But during initial boot (device initialization) there are situations
> where the allocation can only come from a single zone, and acquiring
> a lock on it may then result in a deadlock.

I might be wrong, but if I remember correctly, this was true for
your previous implementation where you'd report page hinting down
from arch_free_page() so you couldn't allocate memory. But this
is not the case anymore.
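
For reference, the normal path Nitesh is referring to looks roughly like
the sketch below; the existing entry points take a gfp_t because the add
path may need to allocate (e.g. an indirect descriptor table). The page
pointer, virtqueue and gfp flag here are only illustrative, not code from
this series:

        struct scatterlist sg;
        int err;

        /* Describe the page being reported to the host. */
        sg_init_one(&sg, page_address(page), PAGE_SIZE);

        /*
         * May allocate with the given gfp flags inside virtqueue_add(),
         * which is what the new helper below is trying to avoid.
         */
        err = virtqueue_add_outbuf(vq, &sg, 1, page, GFP_KERNEL);
        if (!err)
                virtqueue_kick(vq);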

>
> >
> >
> >
> >> ---
> >>  drivers/virtio/virtio_ring.c | 72 ++++++++++++++++++++++++++++++++++++
> >>  include/linux/virtio.h       |  4 ++
> >>  2 files changed, 76 insertions(+)
> >>
> >> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> >> index cd7e755484e3..93c161ac6a28 100644
> >> --- a/drivers/virtio/virtio_ring.c
> >> +++ b/drivers/virtio/virtio_ring.c
> >> @@ -1695,6 +1695,52 @@ static inline int virtqueue_add(struct virtqueue *_vq,
> >> out_sgs, in_sgs, data, ctx, gfp);
> >> }
> >>
> >> +/**
> >> + * virtqueue_add_desc - add a buffer to a chain using a vring desc
> >> + * @vq: the struct virtqueue we're talking about.
> >> + * @addr: address of the buffer to add.
> >> + * @len: length of the buffer.
> >> + * @in: set if the buffer is for the device to write.
> >> + *
> >> + * Returns zero or a negative error (ie. ENOSPC, ENOMEM, EIO).
> >> + */
> >> +int virtqueue_add_desc(struct virtqueue *_vq, u64 addr, u32 len, int in)
> >> +{
> >> +        struct vring_virtqueue *vq = to_vvq(_vq);
> >> +        struct vring_desc *desc = vq->split.vring.desc;
> >> +        u16 flags = in ? VRING_DESC_F_WRITE : 0;
> >> +        unsigned int i;
> >> +        void *data = (void *)addr;
> >> +        int avail_idx;
> >> +
> >> +        /* Sanity check */
> >> +        if (!_vq)
> >> +                return -EINVAL;
> >> +
> >> +        START_USE(vq);
> >> +        if (unlikely(vq->broken)) {
> >> +                END_USE(vq);
> >> +                return -EIO;
> >> +        }
> >> +
> >> +        i = vq->free_head;
> >> +        flags &= ~VRING_DESC_F_NEXT;
> >> +        desc[i].flags = cpu_to_virtio16(_vq->vdev, flags);
> >> +        desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
> >> +        desc[i].len = cpu_to_virtio32(_vq->vdev, len);
> >> +
> >> +        vq->vq.num_free--;
> >> +        vq->free_head = virtio16_to_cpu(_vq->vdev, desc[i].next);
> >> +        vq->split.desc_state[i].data = data;
> >> +        vq->split.avail_idx_shadow = 1;
> >> +        avail_idx = vq->split.avail_idx_shadow;
> >> +        vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev, avail_idx);
> >> +        vq->num_added = 1;
> >> +        END_USE(vq);
> >> +        return 0;
> >> +}
> >> +EXPORT_SYMBOL_GPL(virtqueue_add_desc);
> >> +
> >> /**
> >> * virtqueue_add_sgs - expose buffers to other end
> >> * @vq: the struct virtqueue we're talking about.
> >> @@ -1842,6 +1888,32 @@ bool virtqueue_notify(struct virtqueue *_vq)
> >> }
> >> EXPORT_SYMBOL_GPL(virtqueue_notify);
> >>
> >> +/**
> >> + * virtqueue_kick_sync - update after add_buf and busy wait till update is done
> >> + * @vq: the struct virtqueue
> >> + *
> >> + * After one or more virtqueue_add_* calls, invoke this to kick
> >> + * the other side. Busy wait till the other side is done with the update.
> >> + *
> >> + * Caller must ensure we don't call this with other virtqueue
> >> + * operations at the same time (except where noted).
> >> + *
> >> + * Returns false if kick failed, otherwise true.
> >> + */
> >> +bool virtqueue_kick_sync(struct virtqueue *vq)
> >> +{
> >> +        u32 len;
> >> +
> >> +        if (likely(virtqueue_kick(vq))) {
> >> +                while (!virtqueue_get_buf(vq, &len) &&
> >> +                       !virtqueue_is_broken(vq))
> >> +                        cpu_relax();
> >> +                return true;
> >> +        }
> >> +        return false;
> >> +}
> >> +EXPORT_SYMBOL_GPL(virtqueue_kick_sync);
> >> +
> >> /**
> >> * virtqueue_kick - update after add_buf
> >> * @vq: the struct virtqueue
> >> diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> >> index fa1b5da2804e..58943a3a0e8d 100644
> >> --- a/include/linux/virtio.h
> >> +++ b/include/linux/virtio.h
> >> @@ -57,6 +57,10 @@ int virtqueue_add_sgs(struct virtqueue *vq,
> >>                       unsigned int in_sgs,
> >>                       void *data,
> >>                       gfp_t gfp);
> >> +/* A desc with this init id is treated as an invalid desc */
> >> +int virtqueue_add_desc(struct virtqueue *_vq, u64 addr, u32 len, int in);
> >> +
> >> +bool virtqueue_kick_sync(struct virtqueue *vq);
> >>
> >> bool virtqueue_kick(struct virtqueue *vq);
> >>
> >> --
> >> 2.17.2
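
As I read the two helpers above, the reporting side would pair them
roughly as follows (a sketch only, under the assumption that the caller
already owns the page and the virtqueue; none of this is code from the
series):

        u64 addr = page_to_phys(page);
        int err;

        /* Publish a single descriptor without any memory allocation. */
        err = virtqueue_add_desc(vq, addr, PAGE_SIZE, 0);
        if (err)
                return err;

        /*
         * Kick the host and busy-wait until it has consumed the
         * descriptor, so the guest does not reuse the page while the
         * host is still processing it.
         */
        if (!virtqueue_kick_sync(vq))
                return -EIO;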
