    From: Daniel Vetter
    Date: Fri, 19 Jun 2020
    Subject: Re: [Linaro-mm-sig] [PATCH 04/18] dma-fence: prime lockdep annotations
    On Fri, Jun 19, 2020 at 10:10 PM Jerome Glisse <jglisse@redhat.com> wrote:
    >
    > On Fri, Jun 19, 2020 at 03:18:49PM -0300, Jason Gunthorpe wrote:
    > > On Fri, Jun 19, 2020 at 02:09:35PM -0400, Jerome Glisse wrote:
    > > > On Fri, Jun 19, 2020 at 02:23:08PM -0300, Jason Gunthorpe wrote:
    > > > > On Fri, Jun 19, 2020 at 06:19:41PM +0200, Daniel Vetter wrote:
    > > > >
    > > > > > The madness is only that device B's mmu notifier might need to wait
    > > > > > for fence_B so that the dma operation finishes, which in turn has to
    > > > > > wait for device A to finish first.
    > > > >
    > > > > So it sounds like, fundamentally, you've got this graph of operations
    > > > > across an unknown set of drivers, and the kernel cannot insert itself
    > > > > into the dma_fence hand-offs to re-validate any of the buffers
    > > > > involved? Buffers which by definition cannot be touched by the
    > > > > hardware yet.
    > > > >
    > > > > That really is a pretty horrible place to end up..
    > > > >
    > > > > Pinning really is the right answer for this kind of work flow. I think
    > > > > converting pinning to notifiers should not be done unless notifier
    > > > > invalidation is relatively bounded.
    > > > >
    > > > > I know people like notifiers because they give a bit nicer performance
    > > > > in some happy cases, but this cripples all the bad cases...
    > > > >
    > > > > If pinning doesn't work for some reason maybe we should address that?
    > > >
    > > > Note that the dma fence issue is only true for user-ptr buffers, which
    > > > predate any HMM work and thus were using mmu notifiers already. You
    > > > need the mmu notifier there because of fork and other corner cases.
    > >
    > > I wonder if we should try to fix the fork case more directly - RDMA
    > > has this same problem and added MADV_DONTFORK a long time ago as a
    > > hacky way to deal with it.
    > >
    > > Some crazy page pin that resolved COW in a way that always kept the
    > > physical memory with the mm that initiated the pin?
    >
    > There is just no way to deal with it easily. I thought about forcing
    > the anon_vma (page->mapping for an anonymous page) to the anon_vma that
    > belongs to the vma against which the GUP was done, but it would break
    > things if the page is already in another branch of a fork tree. It
    > would also forbid fast GUP.
    >
    > Quite frankly, fork was not the main motivating factor. A GPU can pin
    > potentially gigabytes of memory, so we wanted to be able to release it,
    > but since Michal's changes to the reclaim code this is no longer
    > effective.

    What, where, how? My patch to annotate reclaim paths with mmu notifier
    possibility just landed in -mm, so if direct reclaim can't reclaim mmu
    notifier'ed stuff anymore we need to know.

    Also this would resolve the entire pain we're discussing in this
    thread about dma_fence_wait deadlocking against anything that's not
    GFP_ATOMIC ...
    -Daniel
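    Spelled out, the deadlock in question is a cycle through direct
    reclaim. The sketch below is illustrative only: the driver paths are
    hypothetical, and the dma_fence_begin/end_signalling() annotations are
    the ones this patch series proposes, which let lockdep catch the cycle.

        #include <linux/dma-fence.h>
        #include <linux/slab.h>

        /*
         * The shape of the deadlock, in a thread that will eventually
         * signal fence_X:
         *   kmalloc(..., GFP_KERNEL)
         *     -> direct reclaim
         *       -> mmu notifier invalidation of some range
         *         -> dma_fence_wait(fence_X)   <- waits on itself
         *
         * So nothing on the path leading to dma_fence_signal() may do
         * GFP_KERNEL allocations; GFP_ATOMIC never enters reclaim.
         */
        static void some_fence_signalling_work(struct dma_fence *fence)
        {
            bool cookie = dma_fence_begin_signalling();
            void *ptr;

            /* Anything here that can block on reclaim is a bug ... */
            ptr = kmalloc(64, GFP_KERNEL);   /* ... and lockdep flags it */
            kfree(ptr);

            dma_fence_signal(fence);
            dma_fence_end_signalling(cookie);
        }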

    >
    > User-ptr buffers should never end up in those weird corner cases. IIRC
    > the first usage was for xorg EXA texture upload; it was then
    > generalized to texture upload in Mesa and later on to more upload
    > cases (vertices, ...). At least this is what I remember today. So in
    > those cases we do not expect fork, splice, mremap, mprotect, ...
    >
    > Maybe we can audit how user-ptr buffers are used today and see if we
    > can define a usage pattern that would allow us to cut corners in the
    > kernel. For instance, we could use the mmu notifier just to block CPU
    > pte updates while we do GUP, and thus never wait on a dma fence.
    >
    > Then the GPU driver just keeps the GUP pin around until it is done
    > with the pages. It can also use the mmu notifier to keep a flag so
    > that the driver knows if it needs to redo a GUP, i.e.:
    >
    > The notifier path:
    >
    >     GPU_mmu_notifier_start_callback(range)
    >         gpu_lock_cpu_pagetable(range)
    >         for_each_bo_in(bo, range) {
    >             bo->need_gup = true;
    >         }
    >         gpu_unlock_cpu_pagetable(range)
    >
    > The validate path:
    >
    >     GPU_validate_buffer_pages(bo)
    >         if (!bo->need_gup)
    >             return;
    >         put_pages(bo->pages);
    >         range = bo_vaddr_range(bo)
    >         gpu_lock_cpu_pagetable(range)
    >         GUP(bo->pages, range)
    >         bo->need_gup = false;  /* clear under the lock so a racing
    >                                   invalidation re-arms the flag */
    >         gpu_unlock_cpu_pagetable(range)
    >
    >
    > Depending on how user_ptr buffers are used today, this could work.
    >
    >
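    For comparison, a rough sketch of this flag-based scheme on top of the
    kernel's mmu_interval_notifier API. struct gpu_bo and its lock are made
    up for illustration; the key property is that the invalidate callback
    only takes a driver lock and sets a flag, and never waits on a
    dma_fence:

        #include <linux/mmu_notifier.h>
        #include <linux/mutex.h>

        struct gpu_bo {
            struct mmu_interval_notifier notifier;
            struct mutex lock;      /* the "gpu cpu pagetable lock" */
            bool need_gup;
            /* pages, vaddr range, ... */
        };

        static bool gpu_bo_invalidate(struct mmu_interval_notifier *mni,
                                      const struct mmu_notifier_range *range,
                                      unsigned long cur_seq)
        {
            struct gpu_bo *bo = container_of(mni, struct gpu_bo, notifier);

            if (mmu_notifier_range_blockable(range))
                mutex_lock(&bo->lock);
            else if (!mutex_trylock(&bo->lock))
                return false;       /* non-blocking context, retry later */

            /* Only flag the bo for re-GUP; never wait on device work. */
            mmu_interval_set_seq(mni, cur_seq);
            bo->need_gup = true;
            mutex_unlock(&bo->lock);
            return true;
        }

        static const struct mmu_interval_notifier_ops gpu_bo_notifier_ops = {
            .invalidate = gpu_bo_invalidate,
        };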
    > > (isn't this broken for O_DIRECT as well anyhow?)
    >
    > Yes, it can in theory: if you have an application that does O_DIRECT
    > and fork concurrently (i.e. O_DIRECT in one thread and fork in
    > another). Note that O_DIRECT after fork is fine; it is an issue only
    > if GUP_fast was able to look up a page with write permission before
    > fork had the chance to update it to read-only for COW.
    >
    > But doing O_DIRECT (or anything that uses fast GUP) in one thread and
    > fork in another is inherently broken, i.e. there is no way to fix it.
    >
    > See commit 17839856fd588f4ab6b789f482ed3ffd7c403e1f ("gup: document
    > and work around "COW can break either way" issue").
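    The racy pattern, as a contrived userspace sketch (error handling
    omitted; "data" is a placeholder file opened with O_DIRECT):

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <pthread.h>
        #include <stdlib.h>
        #include <unistd.h>

        static char *buf;

        static void *reader(void *arg)
        {
            int fd = open("data", O_RDONLY | O_DIRECT);

            /* GUP-fast may grab buf's pages with write permission here... */
            pread(fd, buf, 4096, 0);
            close(fd);
            return NULL;
        }

        int main(void)
        {
            pthread_t t;

            posix_memalign((void **)&buf, 4096, 4096);
            pthread_create(&t, NULL, reader, NULL);

            /* ...while fork() concurrently write-protects them for COW.
             * Whether the DMA then lands in the parent's or the child's
             * copy of the page is a matter of timing. */
            if (fork() == 0)
                _exit(0);

            pthread_join(t, NULL);
            return 0;
        }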
    >
    > >
    > > How do mmu_notifiers help the fork case anyhow? Block fork from
    > > progressing?
    >
    > It enforces ordering between fork and GUP: if fork is first it blocks
    > GUP, and if fork is last then fork waits on GUP and then the user
    > buffer gets invalidated.
    >
    > >
    > > > I probably need to warn AMD folks again that using HMM means that you
    > > > must be able to update the GPU page table asynchronously without
    > > > fence wait.
    > >
    > > It is kind of unrelated to HMM; it just shouldn't be using mmu
    > > notifiers to replace page pinning...
    >
    > Well, my POV is that if you abide by the rules HMM defined then you
    > do not need to pin pages. The rule is asynchronous device page table
    > updates.
    >
    > Pinning pages is problematic: it blocks many core mm features and is
    > just bad all around. It is also inherently broken in the face of
    > fork/mremap/splice/...
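    In practice the asynchronous-update rule above is the usual
    sequence-count retry loop. A sketch, reusing the hypothetical gpu_bo
    from earlier (hmm_range_fault() setup and the device-side details are
    omitted; gpu_update_page_table() is a made-up driver hook):

        #include <linux/hmm.h>
        #include <linux/mmu_notifier.h>

        static int gpu_bo_validate(struct gpu_bo *bo, struct hmm_range *range)
        {
            int ret;

        again:
            range->notifier_seq = mmu_interval_read_begin(&bo->notifier);
            ret = hmm_range_fault(range);    /* snapshot the CPU page table */
            if (ret)
                return ret;

            mutex_lock(&bo->lock);
            if (mmu_interval_read_retry(&bo->notifier, range->notifier_seq)) {
                mutex_unlock(&bo->lock);
                goto again;                  /* raced with an invalidation */
            }
            /* commit: no invalidation raced, safe to point the GPU at it */
            gpu_update_page_table(bo, range);
            mutex_unlock(&bo->lock);
            return 0;
        }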
    >
    > Cheers,
    > Jérôme
    >


    --
    Daniel Vetter
    Software Engineer, Intel Corporation
    http://blog.ffwll.ch
