    From: Jason Gunthorpe
    Subject: Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma
    Date: Wed, 30 Jan 2019
    On Wed, Jan 30, 2019 at 03:43:32PM -0500, Jerome Glisse wrote:
    > On Wed, Jan 30, 2019 at 08:11:19PM +0000, Jason Gunthorpe wrote:
    > > On Wed, Jan 30, 2019 at 01:00:02PM -0700, Logan Gunthorpe wrote:
    > >
    > > > We never changed SGLs. We still use them to pass p2pdma pages, only
    > > > we need to be a bit careful where we send the entire SGL. I see no
    > > > reason why we can't continue to be careful once they're in userspace
    > > > if there's something in GUP to deny them.
    > > >
    > > > It would be nice to have heterogeneous SGLs and it is something we
    > > > should work toward but in practice they aren't really necessary at the
    > > > moment.
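
    Something like the following is the kind of check a GUP consumer
    would need before handing an SGL to hardware that cannot do peer to
    peer (a rough sketch, not code from this patch set; the wrapper is
    made up, though is_pci_p2pdma_page() itself exists):

    #include <linux/scatterlist.h>
    #include <linux/mm.h>

    /* Hypothetical helper (sketch): return true if any segment of the
     * SGL points at p2pdma memory. */
    static bool sgl_has_p2pdma_pages(struct scatterlist *sgl, int nents)
    {
            struct scatterlist *sg;
            int i;

            for_each_sg(sgl, sg, nents, i)
                    if (is_pci_p2pdma_page(sg_page(sg)))
                            return true;
            return false;
    }
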
    > >
    > > RDMA generally cannot cope well with an API that requires
    > > homogeneous SGLs. User space can construct complex MRs
    > > (particularly with the proposed SGL MR flow) and we must marshal
    > > that into a single SGL or the drivers fall apart.
    > >
    > > Jerome explained that the GPU case is worse: a single VMA may have
    > > a random mix of CPU and device pages.
    > >
    > > This is a pretty big blocker that would have to somehow be fixed.
    >
    > Note that HMM takes care of that for RDMA ODP with my ODP-to-HMM
    > patch: what you get for an ODP umem is just a list of DMA addresses
    > you can program your device with. The aim is to spare the driver
    > from having to care about that. The access policy applied when the
    > UMEM object is created by userspace through the verbs API should,
    > however, ensure that for an mmap of a device file it only creates a
    > UMEM that is fully covered by one and only one VMA. A GPU device
    > driver will have one VMA per logical GPU object, and I expect other
    > kinds of devices to do the same so that they can match a VMA to a
    > unique object in their driver.
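
    For concreteness, the single-VMA policy described above amounts to
    something like this check at UMEM creation time (a minimal sketch;
    the function name and exact checks are illustrative, not the actual
    ODP-to-HMM patch):

    /* Sketch of a hypothetical check: require [start, start + length)
     * to be fully covered by exactly one file-backed VMA. */
    static int umem_check_single_vma(struct mm_struct *mm,
                                     unsigned long start,
                                     unsigned long length)
    {
            struct vm_area_struct *vma;
            int ret = 0;

            down_read(&mm->mmap_sem);
            vma = find_vma(mm, start);
            if (!vma || vma->vm_start > start ||
                vma->vm_end < start + length || !vma->vm_file)
                    ret = -EINVAL;
            up_read(&mm->mmap_sem);
            return ret;
    }
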

    A one VMA rule is not really workable.

    With ODP, VMA boundaries can move around across the lifetime of the
    MR, and we have no obvious way to fail anything if userspace puts a
    VMA boundary in the middle of an existing ODP MR address range.
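
    For example, userspace can split the VMA under a live MR with a
    single syscall (illustrative snippet; the buffer layout is made up):

    #include <sys/mman.h>

    /* 'buf' is a page-aligned range covered by an existing ODP MR.
     * Changing protection on one interior page splits the original
     * VMA into three, none of which matches the MR boundaries. */
    void split_vma_under_mr(char *buf)
    {
            mprotect(buf + 2 * 4096, 4096, PROT_READ);
    }
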

    I think the HMM mirror API really needs to deal with this for the
    driver somehow.

    Jason
