Date: Mon, 28 Nov 2016 09:57:51 -0700
From: Jason Gunthorpe <>
Subject: Re: Enabling peer to peer device transactions for PCIe devices
On Sun, Nov 27, 2016 at 04:02:16PM +0200, Haggai Eran wrote:
> > Like in ODP, MMU notifiers/HMM are used to monitor for translation
> > changes. If a change comes in, the GPU driver checks if an executing
> > command is touching those pages and blocks the MMU notifier until the
> > command flushes, then unfaults the page (blocking future commands) and
> > unblocks the mmu notifier.
> I think blocking mmu notifiers against something that is basically
> controlled by user-space can be problematic. This can block things like
> memory reclaim. If you have user-space access to the device's queues,
> user-space can block the mmu notifier forever.
Right, I mentioned that..
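For concreteness, a minimal sketch of that scheme against the 4.x-era
mmu_notifier API follows. struct gpu_device and the gpu_* helpers are
hypothetical driver internals, not an existing API; the only point is
where the notifier blocks.

#include <linux/mmu_notifier.h>

static void gpu_invalidate_range_start(struct mmu_notifier *mn,
				       struct mm_struct *mm,
				       unsigned long start, unsigned long end)
{
	/* hypothetical: recover the driver context from the notifier */
	struct gpu_device *gdev = container_of(mn, struct gpu_device, mn);

	/*
	 * Block the notifier until any executing command that touches
	 * [start, end) has flushed, then tear down the device mapping so
	 * future commands fault and stall until the new translation is
	 * installed.
	 */
	if (gpu_commands_touch_range(gdev, start, end))
		gpu_wait_for_command_flush(gdev);
	gpu_unmap_range(gdev, start, end);
}

static const struct mmu_notifier_ops gpu_mn_ops = {
	.invalidate_range_start	= gpu_invalidate_range_start,
};

The objection above is precisely that gpu_wait_for_command_flush() is at
the mercy of whatever userspace has queued, so memory reclaim can be held
off indefinitely.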
> On PeerDirect, we have some kind of a middle-ground solution for pinning
> GPU memory. We create a non-ODP MR pointing to VRAM but rely on
> user-space and the GPU not to migrate it. If they do, the MR gets
> destroyed immediately.
That sounds horrible. How can that possibly work? What if the MR is being used when the GPU decides to migrate? I would not support that upstream without a lot more explanation..
I know people don't like requiring new hardware, but in this case we really do need ODP hardware to get all the semantics people want..
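As a reference point, the scheme being described would look roughly like
the sketch below. pinned_vram_mr, gpu_buf and vram_migrate_notify() are
hypothetical names, not the actual PeerDirect code; ib_dereg_mr() is the
standard kernel verbs call.

#include <rdma/ib_verbs.h>

struct pinned_vram_mr {
	struct gpu_buf	*buf;	/* VRAM backing, pinned by convention only */
	struct ib_mr	*mr;	/* non-ODP MR pointing at that VRAM */
};

/* GPU driver calls this just before it migrates the buffer out of VRAM. */
static void vram_migrate_notify(struct pinned_vram_mr *pmr)
{
	/*
	 * Destroy the MR immediately.  Nothing here can stop an RDMA
	 * operation that is already executing against the old pages;
	 * that is exactly the "what if the MR is being used" window.
	 */
	if (pmr->mr) {
		ib_dereg_mr(pmr->mr);
		pmr->mr = NULL;
	}
}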
> Another thing I think is that while HMM is good for user-space
> applications, for kernel p2p use there is no need for that. Using
From what I understand, we are not really talking about kernel p2p;
everything proposed so far is being mediated by a userspace VMA, so I'd
focus on making that work.
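To illustrate the userspace-VMA model, a rough sketch: the GPU driver
exposes device memory via mmap(), and the resulting VMA is what gets
registered with the RDMA stack. /dev/gpu0 and its mmap semantics are made
up for the example; ibv_reg_mr() with IBV_ACCESS_ON_DEMAND is the standard
libibverbs ODP registration path.

#include <stddef.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <infiniband/verbs.h>

static struct ibv_mr *reg_gpu_vma(struct ibv_pd *pd, size_t len)
{
	int fd = open("/dev/gpu0", O_RDWR);	/* hypothetical device node */
	void *va;

	if (fd < 0)
		return NULL;

	va = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (va == MAP_FAILED)
		return NULL;

	/*
	 * The MR covers ordinary process virtual addresses; whether the
	 * pages currently live in VRAM or system RAM is the GPU driver's
	 * business, policed by ODP invalidations rather than by pinning.
	 */
	return ibv_reg_mr(pd, va, len,
			  IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ |
			  IBV_ACCESS_REMOTE_WRITE | IBV_ACCESS_ON_DEMAND);
}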
Jason