Date:	Sun, 9 May 2010 12:56:49 -0700 (PDT)
From:	Linus Torvalds <>
Subject:	Re: [PATCH 2/2] mm,migration: Fix race between shift_arg_pages and rmap_walk by guaranteeing rmap_walk finds PTEs created within the temporary stack
On Sun, 9 May 2010, Mel Gorman wrote:
>
> It turns out not to be easy to do the preallocating of PUDs, PMDs and
> PTEs that move_page_tables() needs. To avoid overallocating, it has to
> follow the same logic as move_page_tables(), duplicating some code in
> the process. The ugliest aspect of all is passing those pre-allocated
> pages back into move_page_tables(), where they need to be passed down
> to such functions as __pte_alloc. It turns out to be extremely messy.
Umm. What?
That's crazy talk. I'm not talking about preallocating stuff in order to pass it in to move_page_tables(). I'm talking about just _creating_ the dang page tables early - preallocating them IN THE PROCESS VM SPACE.
IOW, a patch like this (this is a pseudo-patch, totally untested, won't compile, yadda yadda - you need to actually make the people who call "move_page_tables()" call that prepare function first etc etc)
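At the call site - shift_arg_pages() in fs/exec.c is the one execve() cares about - the pattern would be something like this, calling the prepare function from the patch below first (again pseudo-code, untested, variable names illustrative):

	/*
	 * Do all the page table allocations up front, while we can
	 * still fail cleanly with -ENOMEM ...
	 */
	if (prepare_move_page_tables(vma, old_start, vma, new_start, length))
		return -ENOMEM;

	/*
	 * ... so that the actual move can no longer fail halfway
	 * through and leave a partially-moved stack behind.
	 */
	moved_length = move_page_tables(vma, old_start, vma, new_start, length);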
Yeah, if we care about holes in the page tables, we can certainly copy more of the move_page_tables() logic, but it certainly doesn't matter for execve(). This just makes sure that the destination page tables exist first.
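And if we ever did care, the while loop in prepare_move_page_tables() would just grow the same get_old_pmd() check on the source side, something like this (same disclaimers - untested pseudo-code, and it glosses over old_addr and new_addr possibly being aligned differently within a pmd, which the real move_page_tables() loop has to deal with):

	while (new_addr < end_addr) {
		/* Only allocate a destination pmd where the source has one */
		if (get_old_pmd(vma->vm_mm, old_addr)) {
			if (!alloc_new_pmd(vma->vm_mm, new_addr))
				return -ENOMEM;
		}
		old_addr = (old_addr + PMD_SIZE) & PMD_MASK;
		new_addr = (new_addr + PMD_SIZE) & PMD_MASK;
	}

For the dense, small stack range that execve() moves, skipping holes buys you nothing, which is why the patch doesn't bother.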
Linus
---
 mm/mremap.c |   22 +++++++++++++++++++++-
 1 files changed, 21 insertions(+), 1 deletions(-)
diff --git a/mm/mremap.c b/mm/mremap.c
index cde56ee..c14505c 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -128,6 +128,26 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 
 #define LATENCY_LIMIT	(64 * PAGE_SIZE)
 
+/*
+ * Preallocate the page tables, so that we can do the actual move
+ * without any allocations, and thus no error handling etc.
+ */
+int prepare_move_page_tables(struct vm_area_struct *vma,
+		unsigned long old_addr, struct vm_area_struct *new_vma,
+		unsigned long new_addr, unsigned long len)
+{
+	unsigned long end_addr = new_addr + len;
+
+	while (new_addr < end_addr) {
+		pmd_t *new_pmd;
+		new_pmd = alloc_new_pmd(vma->vm_mm, new_addr);
+		if (!new_pmd)
+			return -ENOMEM;
+		new_addr = (new_addr + PMD_SIZE) & PMD_MASK;
+	}
+	return 0;
+}
+
 unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
 		unsigned long new_addr, unsigned long len)
@@ -147,7 +167,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		old_pmd = get_old_pmd(vma->vm_mm, old_addr);
 		if (!old_pmd)
 			continue;
-		new_pmd = alloc_new_pmd(vma->vm_mm, new_addr);
+		new_pmd = get_old_pmd(vma->vm_mm, new_addr);
 		if (!new_pmd)
 			break;
 		next = (new_addr + PMD_SIZE) & PMD_MASK;