Subject: Re: [PATCH v7 14/23] iommu/smmuv3: Implement cache_invalidate
On 08/04/2019 13:19, Eric Auger wrote:
> Implement domain-selective and page-selective IOTLB invalidations.
>
> Signed-off-by: Eric Auger <eric.auger@redhat.com>
>
> ---
> v6 -> v7
> - check the uapi version
>
> v3 -> v4:
> - adapt to changes in the uapi
> - add support for leaf parameter
> - do not use arm_smmu_tlb_inv_range_nosync or arm_smmu_tlb_inv_context
> anymore
>
> v2 -> v3:
> - replace __arm_smmu_tlb_sync by arm_smmu_cmdq_issue_sync
>
> v1 -> v2:
> - properly pass the asid
> ---
> drivers/iommu/arm-smmu-v3.c | 60 +++++++++++++++++++++++++++++++++++++
> 1 file changed, 60 insertions(+)
>
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index 1486baf53425..4366921d8318 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -2326,6 +2326,65 @@ static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
>         mutex_unlock(&smmu_domain->init_mutex);
> }
>
> +static int
> +arm_smmu_cache_invalidate(struct iommu_domain *domain, struct device *dev,
> +                          struct iommu_cache_invalidate_info *inv_info)
> +{
> +        struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +        struct arm_smmu_device *smmu = smmu_domain->smmu;
> +
> +        if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
> +                return -EINVAL;
> +
> +        if (!smmu)
> +                return -EINVAL;
> +
> +        if (inv_info->version != IOMMU_CACHE_INVALIDATE_INFO_VERSION_1)
> +                return -EINVAL;
> +
> +        if (inv_info->cache & IOMMU_CACHE_INV_TYPE_IOTLB) {
> +                if (inv_info->granularity == IOMMU_INV_GRANU_PASID) {
> +                        struct arm_smmu_cmdq_ent cmd = {
> +                                .opcode = CMDQ_OP_TLBI_NH_ASID,
> +                                .tlbi = {
> +                                        .vmid = smmu_domain->s2_cfg.vmid,
> +                                        .asid = inv_info->pasid,
> +                                },
> +                        };
> +
> +                        arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +                        arm_smmu_cmdq_issue_sync(smmu);

I'd much rather make arm_smmu_tlb_inv_context() understand nested
domains than open-code commands all over the place.
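
Very roughly, and purely to sketch what I mean (the helper name and the
explicit asid parameter below are made up; for a nested domain the guest
ASID only arrives in inv_info, so it would have to be passed down):

static void __arm_smmu_tlb_inv_context(struct arm_smmu_domain *smmu_domain,
                                       u16 asid)
{
        struct arm_smmu_device *smmu = smmu_domain->smmu;
        struct arm_smmu_cmdq_ent cmd = {};

        switch (smmu_domain->stage) {
        case ARM_SMMU_DOMAIN_S1:
                cmd.opcode    = CMDQ_OP_TLBI_NH_ASID;
                cmd.tlbi.asid = smmu_domain->s1_cfg.cd.asid;
                cmd.tlbi.vmid = 0;
                break;
        case ARM_SMMU_DOMAIN_NESTED:
                /* Guest-owned stage 1: invalidate by guest ASID under our VMID */
                cmd.opcode    = CMDQ_OP_TLBI_NH_ASID;
                cmd.tlbi.asid = asid;
                cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
                break;
        default:
                cmd.opcode    = CMDQ_OP_TLBI_S12_VMALL;
                cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
                break;
        }

        arm_smmu_cmdq_issue_cmd(smmu, &cmd);
        arm_smmu_cmdq_issue_sync(smmu);
}

Then the PASID-granule branch above reduces to a single call and the
command encoding stays in one place.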

> +
> +                } else if (inv_info->granularity == IOMMU_INV_GRANU_ADDR) {
> +                        struct iommu_inv_addr_info *info = &inv_info->addr_info;
> +                        size_t size = info->nb_granules * info->granule_size;
> +                        bool leaf = info->flags & IOMMU_INV_ADDR_FLAGS_LEAF;
> +                        struct arm_smmu_cmdq_ent cmd = {
> +                                .opcode = CMDQ_OP_TLBI_NH_VA,
> +                                .tlbi = {
> +                                        .addr = info->addr,
> +                                        .vmid = smmu_domain->s2_cfg.vmid,
> +                                        .asid = info->pasid,
> +                                        .leaf = leaf,
> +                                },
> +                        };
> +
> +                        do {
> +                                arm_smmu_cmdq_issue_cmd(smmu, &cmd);
> +                                cmd.tlbi.addr += info->granule_size;
> +                        } while (size -= info->granule_size);
> +                        arm_smmu_cmdq_issue_sync(smmu);

And this in particular I would really like to go all the way through
io_pgtable_tlb_add_flush()/io_pgtable_sync() if at all possible. Hooking
up range-based invalidations is going to be a massive headache if the
abstraction isn't solid.
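
Sketch only (and with the obvious wrinkle that a nested domain as
implemented in this series doesn't own a host io-pgtable to take the
ops from), the idea is roughly:

        /*
         * Route the range invalidation through the io-pgtable hooks so
         * the loop above collapses into the common path and the command
         * building stays in the gather ops. smmu_domain->pgtbl_ops is
         * assumed valid here, which is not the case for a nested domain
         * in this patch.
         */
        struct io_pgtable *iop = io_pgtable_ops_to_pgtable(smmu_domain->pgtbl_ops);

        io_pgtable_tlb_add_flush(iop, info->addr,
                                 info->nb_granules * info->granule_size,
                                 info->granule_size,
                                 info->flags & IOMMU_INV_ADDR_FLAGS_LEAF);
        io_pgtable_tlb_sync(iop);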

Robin.

> +                } else {
> +                        return -EINVAL;
> +                }
> +        }
> +        if (inv_info->cache & IOMMU_CACHE_INV_TYPE_PASID ||
> +            inv_info->cache & IOMMU_CACHE_INV_TYPE_DEV_IOTLB) {
> +                return -ENOENT;
> +        }
> +        return 0;
> +}
> +
> static struct iommu_ops arm_smmu_ops = {
>         .capable             = arm_smmu_capable,
>         .domain_alloc        = arm_smmu_domain_alloc,
> @@ -2346,6 +2405,7 @@ static struct iommu_ops arm_smmu_ops = {
>         .put_resv_regions    = arm_smmu_put_resv_regions,
>         .attach_pasid_table  = arm_smmu_attach_pasid_table,
>         .detach_pasid_table  = arm_smmu_detach_pasid_table,
> +        .cache_invalidate    = arm_smmu_cache_invalidate,
>         .pgsize_bitmap       = -1UL, /* Restricted during device attach */
> };
>
>
