From: Michael Kelley <>
Subject: RE: [PATCH v6 1/6] swiotlb: Fix double-allocation of slots due to broken alignment handling
Date: Mon, 18 Mar 2024 03:39:07 +0000
From: Will Deacon <will@kernel.org> Sent: Friday, March 8, 2024 7:28 AM
>
> Fix the problem by treating the allocation alignment separately to any
> additional alignment requirements from the device, using the maximum
> of the two as the stride to search the buffer slots and taking care
> to ensure a minimum of page-alignment for buffers larger than a page.
>
> This also resolves swiotlb allocation failures occurring due to the
> inclusion of ~PAGE_MASK in 'iotlb_align_mask' for large allocations and
> resulting in alignment requirements exceeding swiotlb_max_mapping_size().
>
> Fixes: bbb73a103fbb ("swiotlb: fix a braino in the alignment check fix")
> Fixes: 0eee5ae10256 ("swiotlb: fix slot alignment checks")
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Dexuan Cui <decui@microsoft.com>
> Reviewed-by: Michael Kelley <mhklinux@outlook.com>
> Reviewed-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
> Tested-by: Nicolin Chen <nicolinc@nvidia.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  kernel/dma/swiotlb.c | 28 +++++++++++++++-------------
>  1 file changed, 15 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index b079a9a8e087..2ec2cc81f1a2 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -982,7 +982,7 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
>  		phys_to_dma_unencrypted(dev, pool->start) & boundary_mask;
>  	unsigned long max_slots = get_max_slots(boundary_mask);
>  	unsigned int iotlb_align_mask =
> -		dma_get_min_align_mask(dev) | alloc_align_mask;
> +		dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
>  	unsigned int nslots = nr_slots(alloc_size), stride;
>  	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
>  	unsigned int index, slots_checked, count = 0, i;
> @@ -993,19 +993,18 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
>  	BUG_ON(!nslots);
>  	BUG_ON(area_index >= pool->nareas);
>
> +	/*
> +	 * For mappings with an alignment requirement don't bother looping to
> +	 * unaligned slots once we found an aligned one.
> +	 */
> +	stride = get_max_slots(max(alloc_align_mask, iotlb_align_mask));
> +
>  	/*
>  	 * For allocations of PAGE_SIZE or larger only look for page aligned
>  	 * allocations.
>  	 */
>  	if (alloc_size >= PAGE_SIZE)
> -		iotlb_align_mask |= ~PAGE_MASK;
> -	iotlb_align_mask &= ~(IO_TLB_SIZE - 1);
> -
> -	/*
> -	 * For mappings with an alignment requirement don't bother looping to
> -	 * unaligned slots once we found an aligned one.
> -	 */
> -	stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
> +		stride = umax(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
>
>  	spin_lock_irqsave(&area->lock, flags);
>  	if (unlikely(nslots > pool->area_nslabs - area->used))
> @@ -1015,11 +1014,14 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
>  	index = area->index;
>
>  	for (slots_checked = 0; slots_checked < pool->area_nslabs; ) {
> -		slot_index = slot_base + index;
> +		phys_addr_t tlb_addr;
>
> -		if (orig_addr &&
> -		    (slot_addr(tbl_dma_addr, slot_index) &
> -		     iotlb_align_mask) != (orig_addr & iotlb_align_mask)) {
> +		slot_index = slot_base + index;
> +		tlb_addr = slot_addr(tbl_dma_addr, slot_index);
> +
> +		if ((tlb_addr & alloc_align_mask) ||
> +		    (orig_addr && (tlb_addr & iotlb_align_mask) !=
> +				  (orig_addr & iotlb_align_mask))) {
>  			index = wrap_area_index(pool, index + 1);
>  			slots_checked++;
>  			continue;
> --
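To see what the new logic computes, here is a minimal user-space sketch of the stride calculation (not the kernel code itself). IO_TLB_SHIFT and PAGE_SHIFT use the usual x86/x64 values; nr_slots(), get_max_slots(), and umax() are simplified stand-ins for the kernel helpers; and the two masks are example inputs, not values from any real device.

#include <stdio.h>

#define IO_TLB_SHIFT   11
#define IO_TLB_SIZE    (1UL << IO_TLB_SHIFT)   /* 2 Kbyte slots */
#define PAGE_SHIFT     12                      /* 4 Kbyte pages (x86/x64) */
#define PAGE_SIZE      (1UL << PAGE_SHIFT)

/* Simplified stand-in for the kernel's nr_slots(): slots covering 'val' bytes */
static unsigned long nr_slots(unsigned long val)
{
        return (val + IO_TLB_SIZE - 1) >> IO_TLB_SHIFT;
}

/* Simplified stand-in for the kernel's get_max_slots() */
static unsigned long get_max_slots(unsigned long mask)
{
        return nr_slots(mask + 1);
}

static unsigned long umax(unsigned long a, unsigned long b)
{
        return a > b ? a : b;
}

int main(void)
{
        /* Example inputs: 8 Kbyte IOMMU granule, no device min-align mask */
        unsigned long alloc_align_mask = 8191;
        unsigned long iotlb_align_mask = 0;    /* already & ~(IO_TLB_SIZE - 1) */
        unsigned long alloc_size = 256 * 1024;
        unsigned long stride;

        /* Stride comes from the stricter of the two alignment requirements */
        stride = get_max_slots(umax(alloc_align_mask, iotlb_align_mask));

        /* Buffers of PAGE_SIZE or larger still get at least page alignment */
        if (alloc_size >= PAGE_SIZE)
                stride = umax(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);

        printf("stride = %lu slots\n", stride); /* 4 slots = 8 Kbytes here */
        return 0;
}

For these inputs the search walks the pool in 4-slot (8 Kbyte) steps, so it only ever tests slots that could satisfy the 8 Kbyte granule.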
Question for IOMMU folks: alloc_align_mask is set only in iommu_dma_map_page(), using the IOMMU granule size. Can the granule ever be larger than PAGE_SIZE? If so, swiotlb_search_pool_area() can fail to find slots even when the swiotlb is empty.
The failure happens when alloc_align_mask requires an alignment larger than PAGE_SIZE and the alloc_size is the swiotlb max of 256 Kbytes (or even a bit smaller in some cases). The swiotlb memory pool is allocated in swiotlb_memblock_alloc() with only PAGE_SIZE alignment. On x86/x64, if alloc_align_mask is 8191 and the pool start address is something like XXXX1000, slot 0 won't satisfy alloc_align_mask. The first slot that does is slot 2 (at XXXX2000, since slots are IO_TLB_SIZE = 2 Kbytes), but an allocation starting there can use at most the 126 slots remaining before the IO_TLB_SEGSIZE boundary, so it can't fulfill a 256 Kbyte (128-slot) request. The problem repeats through the entire swiotlb, and the allocation fails.
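The scenario can be sketched in a few lines of user-space C. The pool base address below is made up for illustration, and the per-slot count of remaining segment slots stands in for the kernel's rule (tracked via slots[].list) that a contiguous allocation can't cross an IO_TLB_SEGSIZE boundary:

#include <stdio.h>

#define IO_TLB_SHIFT    11
#define IO_TLB_SIZE     (1UL << IO_TLB_SHIFT)  /* 2 Kbyte slots */
#define IO_TLB_SEGSIZE  128                    /* slots per segment */

int main(void)
{
        unsigned long pool_start = 0x12341000; /* page-aligned, not 8K-aligned */
        unsigned long alloc_align_mask = 8191; /* 8 Kbyte IOMMU granule - 1 */
        unsigned long nslots = 128;            /* 256 Kbyte request */
        unsigned long i, best = 0;

        for (i = 0; i < IO_TLB_SEGSIZE; i++) {
                unsigned long addr = pool_start + (i << IO_TLB_SHIFT);
                /* Contiguous free slots to the segment end (slots[i].list) */
                unsigned long list = IO_TLB_SEGSIZE - i;

                if (addr & alloc_align_mask)
                        continue;              /* start not aligned enough */
                if (list > best)
                        best = list;
        }

        printf("largest aligned run: %lu slots, need %lu -> %s\n",
               best, nslots, best >= nslots ? "fits" : "fails");
        return 0;
}

With these values the largest aligned run is 126 slots against a need of 128, so the search fails even though every slot in the segment is free.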
Updating swiotlb_memblock_alloc() to use an alignment of IO_TLB_SIZE * IO_TLB_SEGSIZE (i.e., 256 Kbytes) solves the problem for all viable configurations.
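As a quick sanity check on that suggestion: with a segment-aligned pool, slot 0 of every segment satisfies any alloc_align_mask smaller than 256 Kbytes, so an empty segment can always hold a full 128-slot allocation. A sketch, again with made-up values:

#include <stdio.h>

#define IO_TLB_SHIFT    11
#define IO_TLB_SIZE     (1UL << IO_TLB_SHIFT)
#define IO_TLB_SEGSIZE  128

int main(void)
{
        unsigned long seg_bytes = IO_TLB_SIZE * IO_TLB_SEGSIZE; /* 256 Kbytes */
        unsigned long pool_start = 7 * seg_bytes; /* any multiple works */
        unsigned long mask;

        /* Slot 0 of each segment is aligned for any granule up to 256 Kbytes */
        for (mask = 8191; mask < seg_bytes; mask = (mask << 1) | 1)
                printf("mask %7lu: slot 0 %s\n", mask,
                       (pool_start & mask) ? "unaligned" : "aligned");
        return 0;
}

Every mask from 8191 up through 262143 reports "aligned", which is why the stronger pool alignment covers all viable granule sizes.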
Michael