SubjectRe: [PATCH V5 2/3] dma: add Qualcomm Technologies HIDMA management driver
On 11/16/2015 10:58 AM, Arnd Bergmann wrote:
>> The management driver is executed in hypervisor context and
>> is the main management entity for all channels provided by
>> the device.
> Sorry for asking this question so late, but can you explain what the
> point is behind this? It seems counterintuitive to me to have a
> DMA engine that is meant for speeding up memory-to-memory transfers
> when you run it in a virtual machine where you either need to go
> through a virtual IOMMU to set up page table entries, as that will
> likely cause more performance overhead than you could possibly
> gain, or you assume that all the guest memory is pinned, which
> in turn destroys a lot of the assumptions that we are making
> in KVM to have useful VM guests.
>
> Where am I going wrong here?
>

The behavior of HIDMA is no different from PCIe. We are using
platform device passthrough and giving control of the entire HIDMA
device to the guest machine. Therefore, we don't need to trap into the
host machine for driver execution.

I agree that the pages need to be pinned for this to work.
Again, this is no different from PCIe SR-IOV passthrough.
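
For illustration only (not part of this patch set), here is a rough
userspace sketch of how a VMM such as QEMU pins guest RAM when passing a
device through with the VFIO type1 IOMMU backend; the group number,
addresses and sizes below are made up, and the usual status/error checks
are omitted. VFIO_IOMMU_MAP_DMA pins the backing pages and programs the
IOMMU, so the device can DMA into guest memory without trapping to the
host:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
	int container = open("/dev/vfio/vfio", O_RDWR);
	int group = open("/dev/vfio/26", O_RDWR);	/* "26": made-up IOMMU group */

	if (container < 0 || group < 0)
		return 1;

	/* Attach the group to a container and pick the type1 IOMMU backend. */
	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

	/* Stand-in for a chunk of guest RAM in the VMM's address space. */
	size_t sz = 2 * 1024 * 1024;
	void *ram = mmap(NULL, sz, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (ram == MAP_FAILED)
		return 1;

	/* Map (and thereby pin) it at the guest-physical address / IOVA. */
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (__u64)(unsigned long)ram,
		.iova  = 0x40000000ULL,		/* made-up guest-physical address */
		.size  = sz,
	};
	if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map))
		perror("VFIO_IOMMU_MAP_DMA");

	return 0;
}

Once such a mapping exists, the pages stay pinned for its lifetime, which
is exactly the ballooning/overcommit trade-off mentioned below.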

Pinning guest memory removes use cases like ballooning/overcommit, but that
is a choice for the end user to make: whether they want additional I/O
performance or higher memory utilization at the cost of lower I/O performance.

--
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
Linux Foundation Collaborative Project

