 
Subject: Re: [HMM v15 13/16] mm/hmm/migrate: new memory migration helper for use with device memory v2
On Tue, Jan 10, 2017 at 09:30:30AM -0600, David Nellans wrote:
>
> > You are mischaracterizing patches 11-14. Patches 11-12 add new flags
> > and modify existing functions so that they can be shared. Patch 13
> > implements the new migration helper, while patch 14 optimizes it.
> >
> > hmm_migrate() is different from the existing migration code because
> > it works on a virtual address range of a process, whereas the
> > existing migration code works from pages. The only difference from
> > the existing code is that we collect pages from virtual addresses
> > and we allow use of a DMA engine to perform the copy.
> You're right, but why not just introduce a new general migration
> interface that works on a vma range first, using the normal migration
> paths, and then special-case it for HMM and then DMA? Being able to
> migrate based on a vma range certainly makes user-level control of
> memory placement/migration less complicated than page-based
> interfaces.

Special-casing for HMM and DMA is already what those patches do. They
share as much code as feasible with the existing path. There is one
thing to consider here: because we are working on a vma range, we can
easily optimize the unmap step. This is why I do not share any of the
outer loop with the existing code.

Sharing more code than this would be counter-productive from an
optimization point of view.
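
To illustrate, the shape of that outer loop is roughly the following.
Every name and signature below is only a sketch of the idea, not the
actual code from the patch (hmm_pfn_t is the pfn-plus-flags type I
describe further down):

typedef void (*hmm_migrate_copy_t)(const hmm_pfn_t *src, hmm_pfn_t *dst,
				   unsigned long npages, void *private);

int hmm_migrate(struct vm_area_struct *vma,
		unsigned long start, unsigned long end,
		hmm_migrate_copy_t copy, void *private)
{
	unsigned long npages = (end - start) >> PAGE_SHIFT;
	hmm_pfn_t *src, *dst;
	int ret;

	src = kcalloc(npages, sizeof(*src), GFP_KERNEL);
	dst = kcalloc(npages, sizeof(*dst), GFP_KERNEL);
	if (!src || !dst) {
		ret = -ENOMEM;
		goto out;
	}

	/* Walk the page tables once over [start, end) and collect the
	 * source pfns, instead of looking pages up one at a time. */
	ret = hmm_collect_range(vma, start, end, src);
	if (ret)
		goto out;

	/* Unmap and isolate all collected pages in a single pass. This
	 * is the unmap batching that working on a range allows. */
	hmm_unmap_range(vma, start, end, src);

	/* Allocate a destination page for every migrating entry. */
	ret = hmm_alloc_range(vma, start, end, src, dst);
	if (ret)
		goto out;

	/* The caller copies the data, with a DMA engine or the CPU. */
	copy(src, dst, npages, private);

	/* Remap the range so it points at the destination pages. */
	ret = hmm_remap_range(vma, start, end, dst);
out:
	kfree(dst);
	kfree(src);
	return ret;
}

hmm_collect_range(), hmm_unmap_range(), hmm_alloc_range() and
hmm_remap_range() above are hypothetical helpers standing in for the
collect/unmap/remap steps; the point is only that each step runs once
over the whole range.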

>
> > There is nothing that ties hmm_migrate() to HMM. If it makes you
> > feel better I can drop the hmm_ prefix, but I would need another
> > name than migrate() as that one is already taken. I could probably
> > name it vma_range_dma_migrate() or something like that.
> >
> > The only thing that is HMM-specific in this code is understanding
> > the HMM special page table entries and handling those. Such entries
> > can only be migrated by DMA and not by memcpy, which is why I do not
> > modify the existing code to support them.
> I'd be happier if there were a vma_migrate proposed independently; I
> think it would find users outside the HMM sandbox. In the IBM
> migration case, they might want the vma interface but choose to use
> CPU-based migration rather than this DMA interface. It certainly
> would make testing of the vma_migrate interface easier.

Like I said, that code is not in the HMM sandbox; it sits behind its
own kernel option and does not rely on anything HMM-specific besides
hmm_pfn_t, which is a pfn with a bunch of flags. The only difference
from the existing code is that it understands the HMM CPU pte. It can
easily be renamed without the hmm_ prefix if that is what people want.
The hmm_pfn_t is harder to replace, as there isn't anything else that
matches the requirements (we need a few flags: DEVICE, MIGRATE, EMPTY,
UNADDRESSABLE).
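
For clarity, hmm_pfn_t is conceptually nothing more than this (the
flag names match the list above; the exact bit layout is only a
sketch, not the patch's):

typedef unsigned long hmm_pfn_t;

#define HMM_PFN_DEVICE		(1UL << 0) /* backed by device memory */
#define HMM_PFN_MIGRATE		(1UL << 1) /* selected for migration */
#define HMM_PFN_EMPTY		(1UL << 2) /* no page behind this address */
#define HMM_PFN_UNADDRESSABLE	(1UL << 3) /* CPU cannot access this page */
#define HMM_PFN_SHIFT		4

static inline unsigned long hmm_pfn_to_pfn(hmm_pfn_t hpfn)
{
	return (unsigned long)hpfn >> HMM_PFN_SHIFT;
}

static inline hmm_pfn_t hmm_pfn_from_pfn(unsigned long pfn)
{
	return (hmm_pfn_t)(pfn << HMM_PFN_SHIFT);
}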

The DMA copy is a callback function that the caller of hmm_migrate()
provides, so you can easily supply a callback that just does a memcpy
(well, copy_highpage()). There is no need to make any change. I can
even provide a default CPU copy callback.
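
Such a default callback could be as simple as the following (again
matching the sketched signature above, not the actual patch):

/* Default CPU copy callback: plain page-by-page copy, no DMA. */
static void hmm_migrate_cpu_copy(const hmm_pfn_t *src, hmm_pfn_t *dst,
				 unsigned long npages, void *private)
{
	unsigned long i;

	for (i = 0; i < npages; i++) {
		if (!(src[i] & HMM_PFN_MIGRATE))
			continue;
		/* copy_highpage() copies one page through kmap, so it
		 * also works with highmem pages. */
		copy_highpage(pfn_to_page(hmm_pfn_to_pfn(dst[i])),
			      pfn_to_page(hmm_pfn_to_pfn(src[i])));
	}
}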

Cheers,
Jérôme
