From: Joerg Roedel <jroedel@suse.de>
Subject: Re: [PATCH 2/3] virtio_ring: Support DMA APIs
On Tue, Oct 27, 2015 at 07:13:56PM -0700, Andy Lutomirski wrote:
> On Tue, Oct 27, 2015 at 7:06 PM, Joerg Roedel <jroedel@suse.de> wrote:
> > Hi Andy,
> >
> > On Tue, Oct 27, 2015 at 06:17:09PM -0700, Andy Lutomirski wrote:
> >> From: Andy Lutomirski <luto@amacapital.net>
> >>
> >> virtio_ring currently sends the device (usually a hypervisor)
> >> physical addresses of its I/O buffers. This is okay when DMA
> >> addresses and physical addresses are the same thing, but this isn't
> >> always the case. For example, this never works on Xen guests, and
> >> it is likely to fail if a physical "virtio" device ever ends up
> >> behind an IOMMU or swiotlb.
> >
> > The overall code looks good, but I haven't seen any dma_sync* calls.
> > When swiotlb=force is in use this would break.
> >
> >> + vq->vring.desc[head].addr = cpu_to_virtio64(_vq->vdev, vring_map_single(
> >> + vq,
> >> + desc, total_sg * sizeof(struct vring_desc),
> >> + DMA_TO_DEVICE));
> >
>
> Are you talking about a dma_sync call on the descriptor ring itself?
> Isn't dma_alloc_coherent supposed to make that unnecessary? I should
> move the allocation into the virtqueue code.
>
> The docs suggest that I might need to "flush the processor's write
> buffers before telling devices to read that memory". I'm not sure how
> to do that.

The write buffers should be flushed by the DMA API functions if
necessary. For dma_alloc_coherent allocations you don't need to call
dma_sync*, but you do for the map_single/map_page/map_sg ones, as these
might be bounce-buffered.
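
A minimal sketch of that split, using only the generic DMA API (the
device pointer, buffer and the my_example_* helpers below are made up
for illustration; the dma_* calls are the standard interface from
<linux/dma-mapping.h>):

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>

/* Coherent allocation, e.g. for the descriptor ring: no dma_sync_*
 * calls are needed on this memory. */
static void *my_example_alloc_ring(struct device *dev, size_t size,
				   dma_addr_t *dma)
{
	return dma_alloc_coherent(dev, size, dma, GFP_KERNEL);
}

/* Streaming mapping for an I/O buffer: this may be bounce-buffered
 * (e.g. with swiotlb=force), so CPU writes done after the mapping must
 * be pushed to the device with dma_sync_single_for_device() before the
 * device is told to read the buffer. */
static int my_example_send(struct device *dev, void *buf, size_t len)
{
	dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, addr))
		return -ENOMEM;

	/* If the CPU touched buf after dma_map_single(), flush those
	 * writes (copies into the swiotlb bounce buffer when one is
	 * in use). */
	dma_sync_single_for_device(dev, addr, len, DMA_TO_DEVICE);

	/* ... notify the device here ... */

	dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);
	return 0;
}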


Joerg


