Subject: Re: [Linaro-mm-sig] [PATCH 04/18] dma-fence: prime lockdep annotations
    On Fri, Jun 19, 2020 at 06:19:41PM +0200, Daniel Vetter wrote:

    > The madness is only that device B's mmu notifier might need to wait
    > for fence_B so that the dma operation finishes. Which in turn has to
    > wait for device A to finish first.

So, it sounds like, fundamentally, you've got this graph of operations
across an unknown set of drivers, and the kernel cannot insert itself
into dma_fence hand-offs to re-validate any of the buffers involved?
Buffers which, by definition, cannot be touched by the hardware yet.
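
To make it concrete, here is a minimal sketch of the pattern (the
driver and fence names are made up): device B's invalidation callback
blocks on a dma_fence, which may itself be gated on device A finishing.

#include <linux/dma-fence.h>
#include <linux/mmu_notifier.h>

struct b_notifier {
	struct mmu_notifier mn;
	struct dma_fence *fence_b;	/* signals once B's DMA is idle */
};

static int b_invalidate_range_start(struct mmu_notifier *mn,
				    const struct mmu_notifier_range *range)
{
	struct b_notifier *bn = container_of(mn, struct b_notifier, mn);

	/*
	 * Invalidation - and thus reclaim - cannot make progress until
	 * fence_B signals, and fence_B may be waiting on device A,
	 * across an unknown set of drivers.
	 */
	dma_fence_wait(bn->fence_b, false);
	return 0;
}

static const struct mmu_notifier_ops b_mn_ops = {
	.invalidate_range_start = b_invalidate_range_start,
};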

    That really is a pretty horrible place to end up..

Pinning really is the right answer for this kind of workflow. I think
converting pinning to notifiers should not be done unless notifier
invalidation is relatively bounded.
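
By pinning I mean the usual longterm-pin flow, roughly like this
(again just a sketch, names made up): take the pin up front, let the
hardware run, release once the DMA completes. No fence waits ever
show up under reclaim.

#include <linux/mm.h>

/* Pin npages of user memory starting at start for longterm DMA. */
static int example_pin_buffer(unsigned long start, int npages,
			      struct page **pages)
{
	return pin_user_pages_fast(start, npages,
				   FOLL_LONGTERM | FOLL_WRITE, pages);
}

/* Release the pins once the device has finished its DMA. */
static void example_unpin_buffer(struct page **pages, unsigned long npages)
{
	unpin_user_pages(pages, npages);
}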

    I know people like notifiers because they give a bit nicer performance
    in some happy cases, but this cripples all the bad cases..

    If pinning doesn't work for some reason maybe we should address that?

    > Full disclosure: We are aware that we've designed ourselves into an
    > impressive corner here, and there's lots of talks going on about
    > untangling the dma synchronization from the memory management
    > completely. But

I think the documentation is really important: only GPU drivers should
be using this stuff and driving notifiers this way. A complete NO for
any totally-not-a-GPU things in drivers/accel, for sure.

    Jason
