Subject: [PATCH][POWERPC] Workaround for iommu page alignment (#2)

powerpc: Workaround for iommu page alignment

Our iommu page size is currently always 4K. That means with our current
code, drivers may do a dma_map_sg() of a 64K page and obtain a dma_addr_t
that is only 4K aligned.

This works fine in most cases, except for some Infiniband hardware, it
seems, where the driver tells the HW the page size and the HW then
ignores the low-order bits of the DMA address.

This works around it by making our IOMMU code enforce a PAGE_SIZE alignment
for mappings of objects that are page aligned in the first place and whose
size is larger than or equal to a page.
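(Illustration only, not part of the patch: a minimal standalone sketch of
the check the hunks below add, assuming the common case of 4K IOMMU pages
(IOMMU_PAGE_SHIFT = 12) and a 64K kernel PAGE_SHIFT = 16. The helper name
compute_align_order is made up for this sketch; the real code open-codes
the test in iommu_map_sg() and iommu_map_single().)

#include <stdio.h>

#define IOMMU_PAGE_SHIFT	12			/* 4K IOMMU pages */
#define PAGE_SHIFT		16			/* 64K kernel pages */
#define PAGE_SIZE		(1UL << PAGE_SHIFT)
#define PAGE_MASK		(~(PAGE_SIZE - 1))

/* Sketch of the alignment decision: only mappings that start on a
 * kernel page boundary and are at least a kernel page long get the
 * stronger alignment. The value returned is an order counted in
 * IOMMU pages, so 16 - 12 = 4 requests 2^4 = 16 IOMMU pages (64K)
 * of alignment; 0 keeps the old 4K (IOMMU page) alignment. */
static unsigned int compute_align_order(unsigned long vaddr, size_t len)
{
	if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && len >= PAGE_SIZE &&
	    (vaddr & ~PAGE_MASK) == 0)
		return PAGE_SHIFT - IOMMU_PAGE_SHIFT;
	return 0;
}

int main(void)
{
	printf("%u\n", compute_align_order(0x10000, 0x10000));	/* 4 */
	printf("%u\n", compute_align_order(0x11000, 0x10000));	/* 0: not 64K aligned */
	return 0;
}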

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

And this version actually does what the comment says (I had forgotten
to quilt ref... a common mistake).

Index: linux-work/arch/powerpc/kernel/iommu.c
===================================================================
--- linux-work.orig/arch/powerpc/kernel/iommu.c	2007-12-21 10:39:39.000000000 +1100
+++ linux-work/arch/powerpc/kernel/iommu.c	2007-12-21 10:48:12.000000000 +1100
@@ -278,6 +278,7 @@ int iommu_map_sg(struct iommu_table *tbl
         unsigned long flags;
         struct scatterlist *s, *outs, *segstart;
         int outcount, incount, i;
+        unsigned int align;
         unsigned long handle;
 
         BUG_ON(direction == DMA_NONE);
@@ -309,7 +310,12 @@ int iommu_map_sg(struct iommu_table *tbl
                 /* Allocate iommu entries for that segment */
                 vaddr = (unsigned long) sg_virt(s);
                 npages = iommu_num_pages(vaddr, slen);
-                entry = iommu_range_alloc(tbl, npages, &handle, mask >> IOMMU_PAGE_SHIFT, 0);
+                align = 0;
+                if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && slen >= PAGE_SIZE &&
+                    (vaddr & ~PAGE_MASK) == 0)
+                        align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;
+                entry = iommu_range_alloc(tbl, npages, &handle,
+                                          mask >> IOMMU_PAGE_SHIFT, align);
 
                 DBG("  - vaddr: %lx, size: %lx\n", vaddr, slen);
 
@@ -572,7 +578,7 @@ dma_addr_t iommu_map_single(struct iommu
 {
         dma_addr_t dma_handle = DMA_ERROR_CODE;
         unsigned long uaddr;
-        unsigned int npages;
+        unsigned int npages, align;
 
         BUG_ON(direction == DMA_NONE);
 
@@ -580,8 +586,13 @@ dma_addr_t iommu_map_single(struct iommu
         npages = iommu_num_pages(uaddr, size);
 
         if (tbl) {
+                align = 0;
+                if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && size >= PAGE_SIZE &&
+                    ((unsigned long)vaddr & ~PAGE_MASK) == 0)
+                        align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;
+
                 dma_handle = iommu_alloc(tbl, vaddr, npages, direction,
-                                         mask >> IOMMU_PAGE_SHIFT, 0);
+                                         mask >> IOMMU_PAGE_SHIFT, align);
                 if (dma_handle == DMA_ERROR_CODE) {
                         if (printk_ratelimit()) {
                                 printk(KERN_INFO "iommu_alloc failed, "

