Subject: Re: [PATCH v5 4/7] iommu: Factor iommu_iotlb_gather_is_disjoint() out
From: Robin Murphy <robin.murphy@arm.com>
Date: 2021-07-13
On 2021-07-13 10:41, Nadav Amit wrote:
> From: Nadav Amit <namit@vmware.com>
>
> Refactor iommu_iotlb_gather_add_page() and factor out the logic that
> detects whether the IOTLB gather range and a new range are disjoint. This
> will be used by the next patch, which implements different gathering
> logic for AMD.
>
> Note that updating gather->pgsize unconditionally does not affect
> correctness as the function had (and has) an invariant, in which
> gather->pgsize always represents the flushing granularity of its range.
> Arguably, "size" should never be zero, but let's assume for the sake of
> discussion that it might.
>
> If "size" equals to "gather->pgsize", then the assignment in question
> has no impact.
>
> Otherwise, if "size" is non-zero, then iommu_iotlb_sync() would
> initialize the size and range (see iommu_iotlb_gather_init()), and the
> invariant is kept.
>
> Otherwise, "size" is zero, and "gather" already holds a range, so
> gather->pgsize is non-zero and (gather->pgsize && gather->pgsize !=
> size) is true. Therefore, again, iommu_iotlb_sync() would be called and
> initialize the size.
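
FWIW, the case analysis above can be sanity-checked with a small userspace
model of the gather logic. This is purely an illustration, not the kernel
code; gather_model, model_add_page and friends are made-up names:

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for struct iommu_iotlb_gather. */
struct gather_model {
        unsigned long start, end;
        unsigned long pgsize;   /* invariant: granularity of [start, end] */
};

static void model_init(struct gather_model *g)
{
        g->start = ~0UL;
        g->end = 0;
        g->pgsize = 0;
}

static bool model_is_disjoint(struct gather_model *g,
                              unsigned long iova, unsigned long size)
{
        unsigned long start = iova, end = start + size - 1;

        return g->end != 0 &&
               (end + 1 < g->start || start > g->end + 1);
}

static void model_add_page(struct gather_model *g,
                           unsigned long iova, unsigned long size)
{
        /*
         * Case 1: size == g->pgsize: the assignment below is a no-op.
         * Case 2: size != g->pgsize and size != 0: if g->pgsize is non-zero
         *         the condition fires, the range is flushed and reset, and
         *         the assignment re-establishes the invariant.
         * Case 3: size == 0 while a range is already gathered: g->pgsize is
         *         non-zero, so the condition fires here as well.
         */
        if ((g->pgsize && g->pgsize != size) ||
            model_is_disjoint(g, iova, size)) {
                printf("sync [%#lx, %#lx] pgsize=%lu\n",
                       g->start, g->end, g->pgsize);
                model_init(g);
        }

        g->pgsize = size;
        if (iova < g->start)
                g->start = iova;
        if (iova + size - 1 > g->end)
                g->end = iova + size - 1;
}

int main(void)
{
        struct gather_model g;

        model_init(&g);
        model_add_page(&g, 0x1000, 0x1000);   /* starts a 4K-granule range */
        model_add_page(&g, 0x2000, 0x1000);   /* adjacent, same pgsize: merged */
        model_add_page(&g, 0x10000, 0x10000); /* new granularity: sync first */
        printf("final [%#lx, %#lx] pgsize=%lu\n", g.start, g.end, g.pgsize);
        return 0;
}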

With the caveat of one comment on the next patch...

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Jiajun Cao <caojiajun@vmware.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Lu Baolu <baolu.lu@linux.intel.com>
> Cc: iommu@lists.linux-foundation.org
> Cc: linux-kernel@vger.kernel.org
> Acked-by: Will Deacon <will@kernel.org>
> Signed-off-by: Nadav Amit <namit@vmware.com>
> ---
> include/linux/iommu.h | 34 ++++++++++++++++++++++++++--------
> 1 file changed, 26 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index e554871db46f..979a5ceeea55 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -497,6 +497,28 @@ static inline void iommu_iotlb_sync(struct iommu_domain *domain,
>  	iommu_iotlb_gather_init(iotlb_gather);
>  }
>
> +/**
> + * iommu_iotlb_gather_is_disjoint - Checks whether a new range is disjoint
> + *
> + * @gather: TLB gather data
> + * @iova: start of page to invalidate
> + * @size: size of page to invalidate
> + *
> + * Helper for IOMMU drivers to check whether a new range and the gathered range
> + * are disjoint. For many IOMMUs, flushing the IOMMU in this case is better
> + * than merging the two, which might lead to unnecessary invalidations.
> + */
> +static inline
> +bool iommu_iotlb_gather_is_disjoint(struct iommu_iotlb_gather *gather,
> +                                    unsigned long iova, size_t size)
> +{
> +        unsigned long start = iova, end = start + size - 1;
> +
> +        return gather->end != 0 &&
> +                (end + 1 < gather->start || start > gather->end + 1);
> +}
> +
> +
>  /**
>   * iommu_iotlb_gather_add_range - Gather for address-based TLB invalidation
>   * @gather: TLB gather data
> @@ -533,20 +555,16 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
>                                                 struct iommu_iotlb_gather *gather,
>                                                 unsigned long iova, size_t size)
>  {
> -        unsigned long start = iova, end = start + size - 1;
> -
>          /*
>           * If the new page is disjoint from the current range or is mapped at
>           * a different granularity, then sync the TLB so that the gather
>           * structure can be rewritten.
>           */
> -        if (gather->pgsize != size ||
> -            end + 1 < gather->start || start > gather->end + 1) {
> -                if (gather->pgsize)
> -                        iommu_iotlb_sync(domain, gather);
> -                gather->pgsize = size;
> -        }
> +        if ((gather->pgsize && gather->pgsize != size) ||
> +            iommu_iotlb_gather_is_disjoint(gather, iova, size))
> +                iommu_iotlb_sync(domain, gather);
>
> +        gather->pgsize = size;
>          iommu_iotlb_gather_add_range(gather, iova, size);
>  }
>
>
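
As a side note for anyone reviewing the AMD patch against this helper: the
intended driver-side pattern is presumably something along the lines of the
sketch below. This is hypothetical and untested -- example_unmap() is a
made-up name, not code from this series -- it just mirrors the shape of an
unmap callback:

#include <linux/iommu.h>

static size_t example_unmap(struct iommu_domain *domain, unsigned long iova,
                            size_t size, struct iommu_iotlb_gather *gather)
{
        /*
         * Flush eagerly if the new range cannot be merged with what has
         * been gathered so far, rather than widening the gathered range
         * to cover addresses that were never unmapped.
         */
        if (iommu_iotlb_gather_is_disjoint(gather, iova, size))
                iommu_iotlb_sync(domain, gather);

        iommu_iotlb_gather_add_range(gather, iova, size);

        /* ... actual page-table teardown would go here ... */
        return size;
}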
