Subject: [PATCH 4.14 012/137] iommu/amd: make sure TLB to be flushed before IOVA freed
    4.14-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Zhen Lei <thunder.leizhen@huawei.com>

    [ Upstream commit 3c120143f584360a13614787e23ae2cdcb5e5ccd ]

    Although the mapping has already been removed from the page table, it may
    still exist in the TLB. If the freed IOVA is reused by someone else before
    the flush operation has completed, the new user cannot correctly access its
    memory.

    Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
    Fixes: b1516a14657a ('iommu/amd: Implement flush queue')
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
    Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    ---
    drivers/iommu/amd_iommu.c | 2 +-
    1 file changed, 1 insertion(+), 1 deletion(-)

    --- a/drivers/iommu/amd_iommu.c
    +++ b/drivers/iommu/amd_iommu.c
    @@ -2400,9 +2400,9 @@ static void __unmap_single(struct dma_op
     	}
     
     	if (amd_iommu_unmap_flush) {
    -		dma_ops_free_iova(dma_dom, dma_addr, pages);
     		domain_flush_tlb(&dma_dom->domain);
     		domain_flush_complete(&dma_dom->domain);
    +		dma_ops_free_iova(dma_dom, dma_addr, pages);
     	} else {
     		pages = __roundup_pow_of_two(pages);
     		queue_iova(&dma_dom->iovad, dma_addr >> PAGE_SHIFT, pages, 0);
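
    For illustration only, below is a minimal userspace C sketch of the ordering
    problem the commit message describes. It is not the amd_iommu.c code: the
    one-entry "IOTLB", the one-slot IOVA allocator, and every name in it are
    invented for this example. With the buggy order (free before flush) a reused
    IOVA still resolves through the stale cached translation; flushing and
    completing first, as the patch does, closes that window.

    #include <stdbool.h>
    #include <stdio.h>

    struct tlb_entry {
    	unsigned long iova;
    	unsigned long phys;
    	bool valid;
    };

    static struct tlb_entry iotlb;	/* one-entry "device IOTLB" */
    static bool iova_busy;		/* one-slot IOVA allocator */

    static unsigned long alloc_iova(void)
    {
    	iova_busy = true;
    	return 0x1000;
    }

    static void free_iova(void)
    {
    	iova_busy = false;
    }

    static void flush_tlb(void)
    {
    	iotlb.valid = false;
    }

    /* Device-side lookup: prefer the cached entry, else walk the "page table". */
    static unsigned long translate(unsigned long iova, unsigned long phys_in_table)
    {
    	if (iotlb.valid && iotlb.iova == iova)
    		return iotlb.phys;	/* possibly stale */
    	iotlb.iova = iova;
    	iotlb.phys = phys_in_table;
    	iotlb.valid = true;
    	return phys_in_table;
    }

    int main(void)
    {
    	/* First owner maps the IOVA to 0xaaaa000; the device caches it. */
    	unsigned long iova = alloc_iova();
    	translate(iova, 0xaaaa000UL);

    	/* Buggy order (before this patch): free first, flush later.  The
    	 * IOVA can be handed out again and the new owner's access still
    	 * resolves through the stale cached entry. */
    	free_iova();
    	iova = alloc_iova();
    	printf("reused IOVA resolves to %#lx, expected 0xbbbb000\n",
    	       translate(iova, 0xbbbb000UL));

    	/* Fixed order (this patch): flush, wait for completion, then free. */
    	flush_tlb();
    	free_iova();
    	return 0;
    }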
