From: Jacob Pan <jacob.jun.pan@linux.intel.com>
Subject: [RFC 6/7] iommu: Add KVA map API
Date: Wed, 22 Sep 2021
This patch adds a KVA map API. It enforces KVA address range checks and
other potential sanity checks; currently, only the kernel direct map
range is checked.

For trusted devices, the API returns immediately after these sanity
checks pass. For untrusted devices, it serves as a thin wrapper around
the IOMMU map/unmap APIs.
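
For illustration, a calling sequence from a driver could look like the
sketch below (hypothetical: dev, the buffer, and the error handling are
placeholders, and the buffer must lie in the kernel direct map):

	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
	struct page *page = alloc_page(GFP_KERNEL);
	int ret;

	/* map the page for DMA with the domain's default PASID */
	ret = iommu_map_kva(domain, page, PAGE_SIZE,
			    IOMMU_READ | IOMMU_WRITE);
	if (ret)
		return ret;

	/* ... device DMA to/from page_address(page) ... */

	iommu_unmap_kva(domain, page_address(page), PAGE_SIZE);

For an IOMMU_DOMAIN_KVA domain, both calls reduce to the range check,
so the same driver code works in both the shared-page-table and the
remapped case.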

OPEN: Alignment at the minimum page size is required; the API is not as
rich and flexible as the DMA APIs.

Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
---
drivers/iommu/iommu.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
include/linux/iommu.h |  5 +++++
2 files changed, 75 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index acfdcd7ebd6a..45ba55941209 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2490,6 +2490,76 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
}
EXPORT_SYMBOL_GPL(iommu_map);

+/*
+ * REVISIT: This might not be sufficient. Could also check permission match,
+ * exclude kernel text, etc.
+ */
+static inline bool is_kernel_direct_map(unsigned long start, phys_addr_t size)
+{
+	return (start >= PAGE_OFFSET) && ((start + size) <= VMALLOC_START);
+}
+
+/**
+ * iommu_map_kva - Map a kernel virtual address for DMA remapping
+ * @domain: Domain that contains the PASID
+ * @page: Page backing the kernel virtual address to map
+ * @size: Size to map
+ * @prot: Permissions
+ *
+ * DMA requests with the domain's default PASID will target the kernel
+ * virtual address space.
+ *
+ * Return: 0 on success or a negative error code
+ */
+int iommu_map_kva(struct iommu_domain *domain, struct page *page,
+		  size_t size, int prot)
+{
+	phys_addr_t phys = page_to_phys(page);
+	void *kva = phys_to_virt(phys);
+
+	/*
+	 * TODO: Limit DMA to the kernel direct map only; avoid dynamic
+	 * ranges until we have an mmu_notifier to keep the IOTLB coherent
+	 * with the CPU.
+	 */
+	if (!is_kernel_direct_map((unsigned long)kva, size))
+		return -EINVAL;
+
+	/*
+	 * A KVA domain type means the CPU page table is shared: skip
+	 * building IOMMU page tables. This is the fast mode where only
+	 * the sanity check is performed.
+	 */
+	if (domain->type == IOMMU_DOMAIN_KVA)
+		return 0;
+
+	return iommu_map(domain, (unsigned long)kva, phys, size, prot);
+}
+EXPORT_SYMBOL_GPL(iommu_map_kva);
+
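+/**
+ * iommu_unmap_kva - Unmap a kernel virtual address mapped with iommu_map_kva()
+ * @domain: Domain that contains the PASID
+ * @kva: Kernel virtual address to unmap
+ * @size: Size to unmap
+ *
+ * Return: 0 on success or a negative error code
+ */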
+int iommu_unmap_kva(struct iommu_domain *domain, void *kva,
+		    size_t size)
+{
+	if (!is_kernel_direct_map((unsigned long)kva, size))
+		return -EINVAL;
+
+	if (domain->type == IOMMU_DOMAIN_KVA) {
+		pr_debug_ratelimited("unmap kva skipped %p\n", kva);
+		return 0;
+	}
+	/* REVISIT: do we need a fast version? */
+	return iommu_unmap(domain, (unsigned long)kva, size);
+}
+EXPORT_SYMBOL_GPL(iommu_unmap_kva);
+
int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, size_t size, int prot)
{
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index cd8225f6bc23..c0fac050ca57 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -427,6 +427,11 @@ extern size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
extern size_t iommu_map_sg_atomic(struct iommu_domain *domain,
unsigned long iova, struct scatterlist *sg,
unsigned int nents, int prot);
+extern int iommu_map_kva(struct iommu_domain *domain,
+			 struct page *page, size_t size, int prot);
+extern int iommu_unmap_kva(struct iommu_domain *domain,
+			   void *kva, size_t size);
+
extern phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova);
extern void iommu_set_fault_handler(struct iommu_domain *domain,
iommu_fault_handler_t handler, void *token);
--
2.25.1