Subject: Re: [PATCH v2 07/19] OpenRISC: DMA
On Saturday 02 July 2011, Jonas Bonn wrote:
> +void *or1k_dma_alloc_coherent(struct device *dev, size_t size,
> +			      dma_addr_t *dma_handle, gfp_t flag)
> +{
> +	int order;
> +	unsigned long page, va;
> +	pgprot_t prot;
> +	struct vm_struct *area;
> +
> +	/* Only allocate page size areas. */
> +	size = PAGE_ALIGN(size);
> +	order = get_order(size);
> +
> +	page = __get_free_pages(flag, order);
> +	if (!page)
> +		return NULL;
> +
> +	/* Allocate some common virtual space to map the new pages. */
> +	area = get_vm_area(size, VM_ALLOC);
> +	if (area == NULL) {
> +		free_pages(page, order);
> +		return NULL;
> +	}
> +	va = (unsigned long)area->addr;
> +
> +	/* This gives us the real physical address of the first page. */
> +	*dma_handle = __pa(page);
> +
> +	prot = PAGE_KERNEL_NOCACHE;
> +
> +	/* This isn't so much ioremap as just simply 'remap' */
> +	if (ioremap_page_range(va, va + size, *dma_handle, prot)) {
> +		vfree(area->addr);
> +		return NULL;
> +	}
> +
> +	return (void *)va;
> +}

This will result in conflicting mappings, one cached and the other
uncached, which a lot of CPU architectures don't like. Are you sure
that or1k can handle this?

I think at the very least you will need to flush the cache for
the linear mapping, to avoid writing back dirty cache lines over
the DMA buffer.
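
Roughly something like this, right after the allocation;
or1k_flush_dcache_range() here is only a placeholder name for whatever
cache maintenance primitive or1k actually provides, not an existing
function:

	page = __get_free_pages(flag, order);
	if (!page)
		return NULL;

	/* Write back and invalidate the cached linear-mapping alias
	 * before the buffer is handed out through the uncached mapping.
	 * (or1k_flush_dcache_range() is a placeholder name.) */
	or1k_flush_dcache_range(page, page + size);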

You can save a little memory by using alloc_pages_exact instead of
__get_free_pages, which always rounds the allocation up to a
power-of-two size.
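
For example, along these lines; alloc_pages_exact()/free_pages_exact()
take the byte size directly, so the order calculation goes away:

	void *cpu_addr;

	/* Allocates only as many pages as 'size' needs, instead of
	 * rounding the allocation up to the next power of two. */
	cpu_addr = alloc_pages_exact(size, flag);
	if (!cpu_addr)
		return NULL;

	/* ... and in the error path / dma_free_coherent counterpart: */
	free_pages_exact(cpu_addr, size);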

Also, isn't get_vm_area+ioremap_page_range the same as ioremap
on or1k?
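
I.e. something like the following, assuming or1k's ioremap() applies
the same uncached page protection as the open-coded version above:

	void __iomem *va;

	va = ioremap(*dma_handle, size);
	if (!va) {
		free_pages(page, order);
		return NULL;
	}

	return (void __force *)va;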

In the case that ioremap_page_range fails, I think you have a memory
leak, or worse: the pages from __get_free_pages are never freed, and
vfree is called on an area that is not backed by those pages at that
moment.
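
Something along these lines in the failure path would avoid that,
assuming free_vm_area() is the right way to drop a reservation that
never got (fully) mapped:

	if (ioremap_page_range(va, va + size, *dma_handle, prot)) {
		/* Release the vmalloc-space reservation without assuming
		 * it is backed by anything, then give the pages back. */
		free_vm_area(area);
		free_pages(page, order);
		return NULL;
	}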

Arnd

