From: Dan Williams
Date: Tue, 18 Apr 2017
Subject: Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory

On Tue, Apr 18, 2017 at 3:48 PM, Logan Gunthorpe <logang@deltatee.com> wrote:
>
>
> On 18/04/17 04:28 PM, Dan Williams wrote:
>> Unlike the pci bus address offset case, which I think is fundamental
>> to support since shipping archs do this today, I think it is ok to
>> say p2p is restricted to a single sgl that gets to talk to host
>> memory or a single device. That said, what's wrong with a p2p-aware
>> map_sg implementation calling up to the host memory map_sg
>> implementation on a per-sgl basis?
>
> I think Ben said they need mixed sgls, and that is where this gets
> messy. I think I'd prefer this too, since trying to enforce that all
> sgs in a list be one type or the other could be quite difficult given
> the state of the scatterlist code.
>
>>> Also, what happens if p2p pages end up getting passed to a device that
>>> doesn't have the injected dma_ops?
>>
>> This goes back to limiting p2p to a single pci host bridge. If the
>> p2p capability is coordinated with the bridge rather than between
>> the individual devices, then we have a central point to catch this
>> case.
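
To make that concrete, the central check could be as simple as walking
both endpoints up to the root and refusing p2p when they don't land on
the same root bus. Sketch only, names made up:

#include <linux/pci.h>

static struct pci_bus *root_bus_of(struct pci_dev *pdev)
{
        struct pci_bus *bus = pdev->bus;

        /* walk up to the bus directly below the host bridge */
        while (bus->parent)
                bus = bus->parent;
        return bus;
}

static bool p2p_same_host_bridge(struct pci_dev *a, struct pci_dev *b)
{
        return root_bus_of(a) == root_bus_of(b);
}
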
>
> Not really relevant. If these pages get to userspace (as people seem
> keen on doing) or to a less-than-careful kernel driver, they could
> easily end up in the dma_map calls of devices that aren't even
> pci-related (via an O_DIRECT operation on an incorrect file or
> something). The common code must reject these and can't rely on an
> injected dma op.

No, we can't do that at get_user_pages() time; it will always need to
be up to the device driver to fail dma that it can't perform.
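
In other words, the refusal lives in the driver's mapping path, after
get_user_pages() has already succeeded. Something along these lines,
with is_p2p_page() again standing in for whatever predicate we end up
with:

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>

static int drv_map_user_pages(struct device *dev, struct page **pages,
                              int npages, struct scatterlist *sgl,
                              enum dma_data_direction dir)
{
        int i;

        /* refuse dma this device can't perform */
        for (i = 0; i < npages; i++)
                if (is_p2p_page(pages[i]))
                        return -EREMOTEIO;

        sg_init_table(sgl, npages);
        for (i = 0; i < npages; i++)
                sg_set_page(&sgl[i], pages[i], PAGE_SIZE, 0);

        return dma_map_sg(dev, sgl, npages, dir) ? npages : -EIO;
}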
