lkml.org 
[lkml]   [2017]   [Apr]   [18]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
    /
    From
    Date
    SubjectRe: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory
    On Tue, Apr 18, 2017 at 3:56 PM, Logan Gunthorpe <logang@deltatee.com> wrote:
    >
    >
    > On 18/04/17 04:50 PM, Dan Williams wrote:
    >> On Tue, Apr 18, 2017 at 3:48 PM, Logan Gunthorpe <logang@deltatee.com> wrote:
    >>>
    >>>
    >>> On 18/04/17 04:28 PM, Dan Williams wrote:
    >>>> Unlike the pci bus address offset case which I think is fundamental to
    >>>> support since shipping archs do this today, I think it is ok to say
    >>>> p2p is restricted to a single sgl that gets to talk to host memory or
    >>>> a single device. That said, what's wrong with a p2p aware map_sg
    >>>> implementation calling up to the host memory map_sg implementation on
    >>>> a per sgl basis?
    >>>
    >>> I think Ben said they need mixed sgls and that is where this gets messy.
    >>> I think I'd prefer this too given trying to enforce all sgs in a list to
    >>> be one type or another could be quite difficult given the state of the
    >>> scatterlist code.
    >>>
    >>>>> Also, what happens if p2p pages end up getting passed to a device that
    >>>>> doesn't have the injected dma_ops?
    >>>>
    >>>> This goes back to limiting p2p to a single pci host bridge. If the p2p
    >>>> capability is coordinated with the bridge rather than between the
    >>>> individual devices then we have a central point to catch this case.
    >>>
    >>> Not really relevant. If these pages get to userspace (as people seem
    >>> keen on doing) or a less than careful kernel driver they could easily
    >>> get into the dma_map calls of devices that aren't even pci related (via
    >>> an O_DIRECT operation on an incorrect file or something). The common
    >>> code must reject these and can't rely on an injected dma op.
    >>
    >> No, we can't do that at get_user_pages() time, it will always need to
    >> be up to the device driver to fail dma that it can't perform.
    >
    > I'm not sure I follow -- are you agreeing with me? The dma_map_* needs
    > to fail for any dma it cannot perform. Which means either all dma_ops
    > providers need to be p2p aware or this logic has to be in dma_map_*
    > itself. My point being: you can't rely on an injected dma_op for some
    > devices to handle the fail case globally.

    Ah, I see what you're saying now. Yes, we do need something that
    guarantees any dma mapping implementation that gets a struct page that
    it does now know how to translate properly fails the request.

    \
     
     \ /
      Last update: 2017-04-19 01:04    [W:3.460 / U:0.112 seconds]
    ©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site