Subject: Re: [PATCH v5 2/2] iommu/iova: Free global iova rcache on iova alloc failure
From: Robin Murphy <robin.murphy@arm.com>
Date: 2020-11-03
On 2020-09-30 08:44, vjitta@codeaurora.org wrote:
> From: Vijayanand Jitta <vjitta@codeaurora.org>
>
> Whenever an iova alloc request fails, we free the iova ranges present
> in the percpu iova rcaches and then retry, but the global iova rcache
> is not freed. As a result, we can still see iova alloc failures even
> after the retry, because the global rcache is holding on to iovas,
> which can cause fragmentation. So, free the global iova rcache as
> well and then go for the retry.

This looks reasonable to me - it's mildly annoying that we end up with
so many similar-looking functions, but the necessary differences are
right down in the middle of the loops so nothing can reasonably be
factored out :(

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

> Signed-off-by: Vijayanand Jitta <vjitta@codeaurora.org>
> ---
> drivers/iommu/iova.c | 23 +++++++++++++++++++++++
> 1 file changed, 23 insertions(+)
>
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index c3a1a8e..faf9b13 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -25,6 +25,7 @@ static void init_iova_rcaches(struct iova_domain *iovad);
> static void free_iova_rcaches(struct iova_domain *iovad);
> static void fq_destroy_all_entries(struct iova_domain *iovad);
> static void fq_flush_timeout(struct timer_list *t);
> +static void free_global_cached_iovas(struct iova_domain *iovad);
>
> void
> init_iova_domain(struct iova_domain *iovad, unsigned long granule,
> @@ -442,6 +443,7 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
> flush_rcache = false;
> for_each_online_cpu(cpu)
> free_cpu_cached_iovas(cpu, iovad);
> + free_global_cached_iovas(iovad);
> goto retry;
> }
>
> @@ -1057,5 +1059,26 @@ void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad)
> }
> }
>
> +/*
> + * Free all the IOVA ranges of the global cache
> + */
> +static void free_global_cached_iovas(struct iova_domain *iovad)
> +{
> + struct iova_rcache *rcache;
> + unsigned long flags;
> + int i, j;
> +
> + for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
> + rcache = &iovad->rcaches[i];
> + spin_lock_irqsave(&rcache->lock, flags);
> + for (j = 0; j < rcache->depot_size; ++j) {
> + iova_magazine_free_pfns(rcache->depot[j], iovad);
> + iova_magazine_free(rcache->depot[j]);
> + rcache->depot[j] = NULL;
> + }
> + rcache->depot_size = 0;
> + spin_unlock_irqrestore(&rcache->lock, flags);
> + }
> +}
> MODULE_AUTHOR("Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>");
> MODULE_LICENSE("GPL");
>
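
For context, this is roughly how the retry path in alloc_iova_fast() reads
with the patch applied; it is paraphrased from the same file and trimmed, so
the exact code may differ slightly:

unsigned long
alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
		unsigned long limit_pfn, bool flush_rcache)
{
	unsigned long iova_pfn;
	struct iova *new_iova;

	/* Fast path: satisfy the request from the rcaches if possible. */
	iova_pfn = iova_rcache_get(iovad, size, limit_pfn + 1);
	if (iova_pfn)
		return iova_pfn;

retry:
	new_iova = alloc_iova(iovad, size, limit_pfn, true);
	if (!new_iova) {
		unsigned int cpu;

		if (!flush_rcache)
			return 0;

		/* Try replenishing IOVAs by flushing the rcaches, once. */
		flush_rcache = false;
		for_each_online_cpu(cpu)
			free_cpu_cached_iovas(cpu, iovad);
		free_global_cached_iovas(iovad);	/* added by this patch */
		goto retry;
	}

	return new_iova->pfn_lo;
}

The one-shot retry only helps if the flush actually returns IOVAs to the
rbtree, which is why leaving the global depot untouched could still leave the
retry failing.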
