From: John Stultz
Date: Fri, 18 Oct 2019
Subject: Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)

    On Fri, Oct 18, 2019 at 11:41 AM Ayan Halder <Ayan.Halder@arm.com> wrote:
    > On Fri, Oct 18, 2019 at 09:55:17AM +0000, Brian Starkey wrote:
    > > On Thu, Oct 17, 2019 at 01:57:45PM -0700, John Stultz wrote:
    > > > On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd@ti.com> wrote:
    > > > > On 10/17/19 3:14 PM, John Stultz wrote:
    > > > > > But if the objection stands, do you have a proposal for an alternative
    > > > > > way to enumerate a subset of CMA heaps?
    > > > > >
    > > > > When in staging ION had to reach into the CMA framework as the other
    > > > > direction would not be allowed, so cma_for_each_area() was added. If
    > > > > DMA-BUF heaps is not in staging then we can do the opposite, and have
    > > > > the CMA framework register heaps itself using our framework. That way
    > > > > the CMA system could decide what areas to export or not (maybe based on
    > > > > a DT property or similar).
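
    A rough sketch of what that CMA-side registration could look like, reusing
    dma_heap_add() from this series and the existing cma_for_each_area()
    iterator; the cma_should_export() policy hook (and wherever its flag would
    be set from DT) is made up for illustration, and cma_heap_ops refers to the
    ops in the CMA heap patch:

        #include <linux/cma.h>
        #include <linux/dma-heap.h>
        #include <linux/err.h>
        #include <linux/module.h>

        /* Sketch: let the CMA side decide which areas become dma-buf heaps. */
        static int __add_exported_cma_heap(struct cma *cma, void *data)
        {
                struct dma_heap_export_info exp_info = {
                        .name = cma_get_name(cma),
                        .ops  = &cma_heap_ops,  /* ops from the CMA heap patch */
                        .priv = cma,
                };

                /* Hypothetical policy hook -- e.g. driven by a DT property. */
                if (!cma_should_export(cma))
                        return 0;

                return PTR_ERR_OR_ZERO(dma_heap_add(&exp_info));
        }

        static int __init add_exported_cma_heaps(void)
        {
                return cma_for_each_area(__add_exported_cma_heap, NULL);
        }
        module_init(add_exported_cma_heaps);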
    > > >
    > > > Ok. Though the CMA core doesn't have much sense of DT details either,
    > > > so it would probably have to be done in the reserved_mem logic, which
    > > > doesn't feel right to me.
    > > >
    > > > I'd probably guess we should have some sort of dt binding to describe
    > > > a dmabuf cma heap and from that node link to a CMA node via a
    > > > memory-region phandle. Along with maybe the default heap as well? Not
    > > > eager to get into another binding review cycle, and I'm not sure what
    > > > non-DT systems will do yet, but I'll take a shot at it and iterate.
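
    Not the binding itself, but a rough C sketch of how a heap driver could
    resolve such a node's memory-region phandle back to the underlying
    reserved-memory region with existing helpers; the "dma-heap" node this
    would be called on is hypothetical, since the binding doesn't exist yet:

        #include <linux/of.h>
        #include <linux/of_reserved_mem.h>

        /*
         * Sketch only: resolve a hypothetical heap node's memory-region
         * phandle to its reserved-memory region. For "shared-dma-pool"
         * regions, rmem_cma_setup() stores the struct cma in rmem->priv.
         */
        static struct reserved_mem *heap_node_to_rmem(struct device_node *heap_np)
        {
                struct device_node *mem_np;
                struct reserved_mem *rmem;

                mem_np = of_parse_phandle(heap_np, "memory-region", 0);
                if (!mem_np)
                        return NULL;

                rmem = of_reserved_mem_lookup(mem_np);
                of_node_put(mem_np);
                return rmem;
        }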
    > > >
    > > > > The end result is the same so we can make this change later (it has to
    > > > > come after DMA-BUF heaps is in anyway).
    > > >
    > > > Well, I'm hesitant to merge code that exposes all the CMA heaps and
    > > > then add patches that become more selective, should anyone depend on
    > > > the initial behavior. :/
    > >
    > > How about only auto-adding the system default CMA region (cma->name ==
    > > "reserved")?
    > >
    > > And/or the CMA auto-add could be behind a config option? It seems a
    > > shame to further delay this, and the CMA heap itself really is useful.
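
    For the default-region-only option above, a minimal sketch, assuming the
    __add_cma_heap() helper from the CMA heap patch; dev_get_cma_area(NULL)
    returns the system default CMA area:

        #include <linux/dma-contiguous.h>
        #include <linux/module.h>

        /* Sketch: only expose the system default CMA area as a heap. */
        static int __init add_default_cma_heap(void)
        {
                struct cma *default_cma = dev_get_cma_area(NULL);

                if (default_cma)
                        return __add_cma_heap(default_cma, NULL);
                return 0;
        }
        module_init(add_default_cma_heap);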
    > >
    > A bit of a detour: coming back to the issue of why the following node
    > was not getting detected by the dma-buf heaps framework.
    >
    > reserved-memory {
    >         #address-cells = <2>;
    >         #size-cells = <2>;
    >         ranges;
    >
    >         display_reserved: framebuffer@60000000 {
    >                 compatible = "shared-dma-pool";
    >                 linux,cma-default;
    >                 reusable;    <<<<----- This was missing in our earlier node
    >                 reg = <0 0x60000000 0 0x08000000>;
    >         };
    > };

    Right. It has to be a CMA region for us to expose it from the cma heap.
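
    For reference, the gate that makes 'reusable' mandatory is right at the top
    of rmem_cma_setup() in kernel/dma/contiguous.c; roughly (abridged, from
    memory, not the full function):

        /* Abridged: a "shared-dma-pool" region is only set up as CMA if it
         * is marked reusable and is not no-map. */
        static int __init rmem_cma_setup(struct reserved_mem *rmem)
        {
                unsigned long node = rmem->fdt_node;

                if (!of_get_flat_dt_prop(node, "reusable", NULL) ||
                    of_get_flat_dt_prop(node, "no-map", NULL))
                        return -EINVAL;

                /* ... alignment checks, cma_init_reserved_mem(), etc. ... */
                return 0;
        }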


    > With 'reusable', rmem_cma_setup() succeeds, but the kernel crashes as follows:
    >
    > [ 0.450562] WARNING: CPU: 2 PID: 1 at mm/cma.c:110 cma_init_reserved_areas+0xec/0x22c

    Is the value 0x60000000 you're using something you just guessed at? It
    seems like the warning here is saying the pfn calculated from the base
    address isn't valid.
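
    In other words, every page of the reserved range has to be backed by memory
    the kernel actually knows about (i.e. the region should sit inside a /memory
    node). Illustration only, not the exact mm/cma.c code:

        #include <linux/mmzone.h>
        #include <linux/pfn.h>
        #include <linux/types.h>

        /* Illustration: CMA activation effectively requires this to hold for
         * the whole area; a base like 0x60000000 that lies outside any
         * /memory node will fail the pfn_valid() check. */
        static bool cma_region_backed_by_memory(phys_addr_t base, phys_addr_t size)
        {
                unsigned long pfn;

                for (pfn = PFN_DOWN(base); pfn < PFN_UP(base + size); pfn++)
                        if (!pfn_valid(pfn))
                                return false;
                return true;
        }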

    thanks
    -john
