    Subject: Re: [PATCH v3 6/6] iommu/amd: Sync once for scatter-gather operations


    > On Jun 15, 2021, at 4:25 AM, Robin Murphy <robin.murphy@arm.com> wrote:
    >
    > On 2021-06-07 19:25, Nadav Amit wrote:
    >> From: Nadav Amit <namit@vmware.com>
    >> On virtual machines, software must flush the IOTLB after each page table
    >> entry update.
    >> The iommu_map_sg() code iterates through the given scatter-gather list
    >> and invokes iommu_map() for each element, which calls into the vendor
    >> IOMMU driver through an iommu_ops callback. As a result, a single sg
    >> mapping may lead to multiple IOTLB flushes.
    >> Fix this by adding an amd_iommu_iotlb_sync_map() callback and flushing
    >> at this point, after all sg mappings have been set.
    >> This commit follows and is inspired by commit 933fcd01e97e2
    >> ("iommu/vt-d: Add iotlb_sync_map callback").
    >> Cc: Joerg Roedel <joro@8bytes.org>
    >> Cc: Will Deacon <will@kernel.org>
    >> Cc: Jiajun Cao <caojiajun@vmware.com>
    >> Cc: Robin Murphy <robin.murphy@arm.com>
    >> Cc: Lu Baolu <baolu.lu@linux.intel.com>
    >> Cc: iommu@lists.linux-foundation.org
    >> Cc: linux-kernel@vger.kernel.org
    >> Signed-off-by: Nadav Amit <namit@vmware.com>
    >> ---
    >> drivers/iommu/amd/iommu.c | 15 ++++++++++++---
    >> 1 file changed, 12 insertions(+), 3 deletions(-)
    >> diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
    >> index 128f2e889ced..dd23566f1db8 100644
    >> --- a/drivers/iommu/amd/iommu.c
    >> +++ b/drivers/iommu/amd/iommu.c
    >> @@ -2027,6 +2027,16 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
    >>  	return ret;
    >>  }
    >> +static void amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
    >> +				     unsigned long iova, size_t size)
    >> +{
    >> +	struct protection_domain *domain = to_pdomain(dom);
    >> +	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
    >> +
    >> +	if (ops->map)
    >
    > Not too critical since you're only moving existing code around, but is ops->map ever not set? Either way the check ends up looking rather out-of-place here :/
    >
    > It's not very clear what the original intent was - I do wonder whether it's supposed to be related to PAGE_MODE_NONE, but given that amd_iommu_map() has an explicit check and errors out early in that case, we'd never get here anyway. Possibly something to come back and clean up later?

    [ +Suravee ]

    According to what I see in the git log, the checks for ops->map (as well as ops->unmap) were introduced relatively recently by Suravee [1] in preparation for AMD IOMMU v2 page tables [2]. Since I do not know what he plans, I prefer not to touch this code.

    [1] https://lore.kernel.org/linux-iommu/20200923101442.73157-13-suravee.suthikulpanit@amd.com/
    [2] https://lore.kernel.org/linux-iommu/20200923101442.73157-1-suravee.suthikulpanit@amd.com/
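
    For anyone skimming the thread, here is a standalone sketch of the batching idea the patch implements: map every scatter-gather element first, then issue one IOTLB sync covering the whole range, instead of flushing after each element. This is plain C, not kernel code; drv_map() and drv_iotlb_sync_map() are made-up hooks standing in for the iommu_ops ->map and ->iotlb_sync_map callbacks.

    /*
     * Standalone sketch, not kernel code: the hooks below are made up and
     * merely stand in for the iommu_ops ->map and ->iotlb_sync_map callbacks.
     */
    #include <stddef.h>
    #include <stdio.h>

    struct sg_elem {
    	unsigned long iova;
    	size_t len;
    };

    /* stand-in for the per-element map callback (no flush here) */
    static void drv_map(unsigned long iova, size_t len)
    {
    	printf("map    iova=0x%lx len=%zu\n", iova, len);
    }

    /* stand-in for iotlb_sync_map(): one flush for the whole mapped range */
    static void drv_iotlb_sync_map(unsigned long iova, size_t size)
    {
    	printf("flush  iova=0x%lx size=%zu (single IOTLB sync)\n", iova, size);
    }

    /* map all elements first, then sync once instead of once per element */
    static void map_sg_then_sync(const struct sg_elem *sg, int nents)
    {
    	unsigned long start;
    	size_t total = 0;
    	int i;

    	if (nents <= 0)
    		return;

    	start = sg[0].iova;
    	for (i = 0; i < nents; i++) {
    		drv_map(sg[i].iova, sg[i].len);
    		total += sg[i].len;
    	}
    	drv_iotlb_sync_map(start, total);
    }

    int main(void)
    {
    	/* contiguous IOVA range split across three sg elements */
    	struct sg_elem sg[] = {
    		{ 0x100000, 4096 },
    		{ 0x101000, 8192 },
    		{ 0x103000, 4096 },
    	};

    	map_sg_then_sync(sg, 3);
    	return 0;
    }

    On a virtual machine, where each IOTLB flush means an exit to the hypervisor, collapsing the per-element flushes into a single sync after the loop is exactly the saving the patch is after.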