Subject: Re: [PATCH v6 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps
On Thu, Jul 25, 2019 at 10:25:50AM -0400, Andrew F. Davis wrote:
> On 7/25/19 10:11 AM, Christoph Hellwig wrote:
> > On Thu, Jul 25, 2019 at 10:10:08AM -0400, Andrew F. Davis wrote:
> >> Pages yes, but not "normal" pages from the kernel managed area.
> >> page_to_pfn() will return bad values on the pages returned by this
> >> allocator and so will any of the kernel sync/map functions. Therefore
> >> those operations cannot be common and need special per-heap handling.
> >
> > Well, that means this thing is buggy and abuses the scatterlist API
> > and we can't merge it anyway, so it is irrelevant.
> >
>
> Since when do scatterlists need to only have kernel virtual backed
> memory pages? Device memory is stored in scatterlists and
> dma_sync_sg_for_* would fail just the same when the cache ops were
> attempted.

I'm not sure what you mean by virtual backed memory pages, as we
don't really have that concept.

But a page in the scatterlist needs to be usable everywhere we'd
normally use a page, e.g. page_to_phys, page_to_pfn, kmap, or
page_address (if !highmem), as consumers, including the dma mapping
interface, do all of that.

If you want to dma map memory that does not have page backing, you
need to use dma_map_resource.
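
Something along these lines (a sketch only; map_carveout and its
parameters are illustrative, not from the patch set):

#include <linux/dma-mapping.h>

/* map a carveout/MMIO range that has no struct page backing */
static dma_addr_t map_carveout(struct device *dev, phys_addr_t phys, size_t size)
{
	dma_addr_t dma;

	dma = dma_map_resource(dev, phys, size, DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(dev, dma))
		return DMA_MAPPING_ERROR;

	/* ... hand 'dma' to the device; undo with dma_unmap_resource() ... */
	return dma;
}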
