Subject: Re: [PATCH] arm64: Fix swiotlb fallback allocation
On Mon, Jan 16, 2017 at 12:46:33PM +0100, Alexander Graf wrote:
> Commit b67a8b29df introduced logic to skip swiotlb allocation when all memory
> is DMA accessible anyway.
>
> While this is a great idea, __dma_alloc still calls the swiotlb code unconditionally
> to allocate memory when no CMA memory is available. The swiotlb code is
> called so that we at least try get_free_pages().
>
> Without initialization, the swiotlb allocation code tries to access io_tlb_list,
> which is NULL. That results in a stack trace like this:
>
> Unable to handle kernel NULL pointer dereference at virtual address 00000000
> [...]
> [<ffff00000845b908>] swiotlb_tbl_map_single+0xd0/0x2b0
> [<ffff00000845be94>] swiotlb_alloc_coherent+0x10c/0x198
> [<ffff000008099dc0>] __dma_alloc+0x68/0x1a8
> [<ffff000000a1b410>] drm_gem_cma_create+0x98/0x108 [drm]
> [<ffff000000abcaac>] drm_fbdev_cma_create_with_funcs+0xbc/0x368 [drm_kms_helper]
> [<ffff000000abcd84>] drm_fbdev_cma_create+0x2c/0x40 [drm_kms_helper]
> [<ffff000000abc040>] drm_fb_helper_initial_config+0x238/0x410 [drm_kms_helper]
> [<ffff000000abce88>] drm_fbdev_cma_init_with_funcs+0x98/0x160 [drm_kms_helper]
> [<ffff000000abcf90>] drm_fbdev_cma_init+0x40/0x58 [drm_kms_helper]
> [<ffff000000b47980>] vc4_kms_load+0x90/0xf0 [vc4]
> [<ffff000000b46a94>] vc4_drm_bind+0xec/0x168 [vc4]
> [...]
>
> Thankfully, the swiotlb code has just learned how to skip allocations when the
> FORCE_NO option is set. This patch configures the swiotlb code to use that
> option if we decide not to initialize the swiotlb framework.
>
> Fixes: b67a8b29df ("arm64: mm: only initialize swiotlb when necessary")
> Signed-off-by: Alexander Graf <agraf@suse.de>
> CC: Catalin Marinas <catalin.marinas@arm.com>
> CC: Jisheng Zhang <jszhang@marvell.com>
> CC: Geert Uytterhoeven <geert+renesas@glider.be>
> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thanks for the fix.
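
For readers following along, the patch itself is not quoted here, but a minimal
sketch of the approach the commit message describes might look like this
(assuming the skip decision lives in arm64's mem_init() and that the "FORCE_NO"
option is the SWIOTLB_NO_FORCE value of swiotlb_force; illustrative only, not
the actual patch):

	if (swiotlb_force == SWIOTLB_FORCE ||
	    max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
		swiotlb_init(1);
	else
		/* skip bounce-buffer allocations entirely */
		swiotlb_force = SWIOTLB_NO_FORCE;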

BTW, I wonder whether we also need to improve the original commit
slightly, in case we get a device mask smaller than what max_pfn covers:

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index e04082700bb1..23090db2f5ba 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -349,7 +349,7 @@ static int __swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
 	if (swiotlb)
 		return swiotlb_dma_supported(hwdev, mask);
-	return 1;
+	return phys_to_dma(hwdev, PFN_PHYS(max_pfn) - 1) <= mask;
 }
 
 static struct dma_map_ops swiotlb_dma_ops = {
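
To make the concern concrete, here is a stand-alone model of the suggested
check (illustrative only; it assumes an identity phys-to-DMA mapping, which
real platforms need not have, and uses hypothetical helper names):

	#include <stdio.h>
	#include <stdint.h>

	#define PAGE_SHIFT	12
	#define PFN_PHYS(pfn)	((uint64_t)(pfn) << PAGE_SHIFT)

	/* Stand-in for the kernel's phys_to_dma(); identity mapping assumed. */
	static uint64_t phys_to_dma_model(uint64_t phys)
	{
		return phys;
	}

	static int dma_supported_model(uint64_t max_pfn, uint64_t mask)
	{
		return phys_to_dma_model(PFN_PHYS(max_pfn) - 1) <= mask;
	}

	int main(void)
	{
		uint64_t max_pfn = 0x280000;	/* RAM ending at 10 GiB in this example */

		/* 32-bit device: top of RAM sits above 4 GiB, so the check now
		 * returns 0 instead of the old unconditional 1. */
		printf("32-bit mask: %d\n", dma_supported_model(max_pfn, 0xffffffffULL));
		/* 64-bit device: all of RAM is reachable, so the check returns 1. */
		printf("64-bit mask: %d\n", dma_supported_model(max_pfn, UINT64_MAX));
		return 0;
	}
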
--
Catalin
