Subject: Re: [PATCH 2/2] dma-mapping: force unencrypted devices are always addressing limited

Hi, Christoph.


On Wed, 2019-12-04 at 14:03 +0100, Christoph Hellwig wrote:
> Devices that are forced to DMA through swiotlb need to be treated as
> if they are addressing limited.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  include/linux/dma-direct.h | 1 +
>  kernel/dma/direct.c        | 8 ++++++--
>  kernel/dma/mapping.c       | 3 +++
>  3 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
> index 24b8684aa21d..83aac21434c6 100644
> --- a/include/linux/dma-direct.h
> +++ b/include/linux/dma-direct.h
> @@ -85,4 +85,5 @@ int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs);
>  int dma_direct_supported(struct device *dev, u64 mask);
> +bool dma_direct_addressing_limited(struct device *dev);
>  #endif /* _LINUX_DMA_DIRECT_H */
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 6af7ae83c4ad..450f3abe5cb5 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -497,11 +497,15 @@ int dma_direct_supported(struct device *dev, u64 mask)
>  	return mask >= __phys_to_dma(dev, min_mask);
>  }
>  
> +bool dma_direct_addressing_limited(struct device *dev)
> +{
> +	return force_dma_unencrypted(dev) || swiotlb_force == SWIOTLB_FORCE;
> +}
> +
>  size_t dma_direct_max_mapping_size(struct device *dev)
>  {
>  	/* If SWIOTLB is active, use its maximum mapping size */
> -	if (is_swiotlb_active() &&
> -	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> +	if (is_swiotlb_active() && dma_addressing_limited(dev))
>  		return swiotlb_max_mapping_size(dev);
>  	return SIZE_MAX;
>  }
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index 1dbe6d725962..ebc60633d89a 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -416,6 +416,9 @@ EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
>   */
>  bool dma_addressing_limited(struct device *dev)
>  {
> +	if (dma_is_direct(get_dma_ops(dev)) &&
> +	    dma_direct_addressing_limited(dev))
> +		return true;

This works fine for vmwgfx, for which the expression below is always 0.
But it looks like the only current user of dma_addressing_limited()
outside of the dma code, radeon, actually wants only the expression
below, to force GFP_DMA32 page allocations when the device's dma
address space is limited. Perhaps Christian can elaborate on that.
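
For reference, the radeon use case as I understand it would be roughly
the following (an untested sketch of the pattern only, with a made-up
helper name, not the actual radeon code):

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Sketch: restrict backing pages to the low 4 GiB when the device
 * cannot address all of memory.
 */
static struct page *ttm_alloc_backing_page(struct device *dev)
{
	gfp_t gfp = GFP_KERNEL;

	if (dma_addressing_limited(dev))
		gfp |= GFP_DMA32;

	return alloc_page(gfp);
}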

So in the end it looks like we have two different use cases. One is to
force coherent memory (vmwgfx, possibly other graphics drivers) or a
reduced queue depth (vmw_pvscsi) when we have bounce-buffering.
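
For the bounce-buffering case the clamping would look roughly like what
the scsi midlayer already does with dma_max_mapping_size(); a minimal
sketch, assuming a made-up helper name and not the actual vmw_pvscsi
code:

#include <linux/blkdev.h>
#include <linux/dma-mapping.h>
#include <scsi/scsi_host.h>

/* Sketch: cap the transfer size to what swiotlb can bounce. */
static void clamp_to_bounce_buffer(struct Scsi_Host *shost,
				   struct device *dev)
{
	size_t max = dma_max_mapping_size(dev);

	if (max < SIZE_MAX)
		shost->max_sectors = min_t(unsigned int,
					   shost->max_sectors,
					   max >> SECTOR_SHIFT);
}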

The other one is to force GFP_DMA32 page allocation when the device's
dma addressing is limited. Perhaps that mode could be replaced by using
dma-coherent memory, and that functionality stripped from TTM?
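
That is, instead of allocating GFP_DMA32 system pages and mapping them,
let the dma layer hand out memory the device can address; a minimal
sketch with a hypothetical helper, just to illustrate the idea:

#include <linux/dma-mapping.h>

/* Sketch: dma-coherent backing instead of GFP_DMA32 system pages. */
static void *alloc_coherent_backing(struct device *dev, size_t size,
				    dma_addr_t *dma_addr)
{
	return dma_alloc_coherent(dev, size, dma_addr, GFP_KERNEL);
}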

>  	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
>  			dma_get_required_mask(dev);
>  }


Thanks,
Thomas
