Subject: Re: NVMe vs DMA addressing limitations
On Tue, Jan 10, 2017 at 12:01:05PM +0100, Arnd Bergmann wrote:
> Another workaround we might need is to limit the amount of concurrent DMA
> in the NVMe driver based on some platform quirk. The way that NVMe works,
> it can have very large amounts of data concurrently mapped into
> the device.

That's not really just NVMe - other storage and network controllers can
also DMA map giant amounts of memory. There are a couple of aspects to it:

- dma coherent memory - right now NVMe doesn't use too much of it,
but upcoming low-end NVMe controllers will soon start to require
fairly large amounts of it for the host memory buffer feature that
allows for DRAM-less controller designs. As an interesting quirk,
that memory is only used by the PCIe device and never accessed
by the Linux host at all (a rough allocation sketch follows below).

- size vs number of dynamic mappings. We probably want the dma_ops
to specify a maximum mapping size for a given device. As long as we
can make progress with a few mappings, swiotlb / the iommu can just
fail the mapping and the driver will propagate that to the block layer,
which throttles I/O (see the second sketch below).
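
To illustrate the coherent memory point, here is a minimal sketch of what
an HMB-style allocation could look like. The hmb_alloc_chunk() helper is
made up for illustration; dma_alloc_attrs() and DMA_ATTR_NO_KERNEL_MAPPING
are the generic DMA API, and the attribute fits here because the host never
touches the buffer:

#include <linux/dma-mapping.h>

/*
 * Hypothetical helper, not the in-tree nvme code: allocate one chunk of
 * dma coherent memory for a host memory buffer style feature.  The host
 * never touches this memory, so DMA_ATTR_NO_KERNEL_MAPPING lets the
 * implementation skip creating a kernel virtual mapping for it.
 */
static void *hmb_alloc_chunk(struct device *dev, size_t size,
			     dma_addr_t *dma_addr)
{
	return dma_alloc_attrs(dev, size, dma_addr, GFP_KERNEL,
			       DMA_ATTR_NO_KERNEL_MAPPING);
}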
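
And a minimal sketch of the maximum mapping size idea, assuming a
hypothetical ->max_mapping_size callback in struct dma_map_ops and a
hypothetical dma_max_mapping_size() wrapper (neither exists today);
a block driver could then clamp the request size it advertises:

#include <linux/blkdev.h>
#include <linux/dma-mapping.h>

/*
 * Hypothetical ->max_mapping_size support, sketched on top of the
 * existing dma_map_ops / get_dma_ops() infrastructure.
 */
static size_t dma_max_mapping_size(struct device *dev)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	/* no callback means no limit the driver needs to care about */
	if (!ops || !ops->max_mapping_size)
		return SIZE_MAX;
	return ops->max_mapping_size(dev);
}

/* example block driver clamping its queue limits (512-byte sectors) */
static void foo_set_queue_limits(struct device *dev, struct request_queue *q)
{
	size_t max = dma_max_mapping_size(dev);

	if (max != SIZE_MAX)
		blk_queue_max_hw_sectors(q, max >> 9);
}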
