Subject: Re: [patch 2/5] Reinstate ZERO_PAGE optimization in get_user_pages() and fix XIP
On Tue, Jun 24, 2008 at 1:27 AM, Linus Torvalds
<torvalds@linux-foundation.org> wrote:

> On the other hand, if you add a trace to the "use_zero_page()" function to
> print out the vm_flags and other details, that probably would help.

Let me know if you still want me to test this.
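
A trace of that sort could be as simple as a printk at the top of
use_zero_page() (a sketch only; the exact format and fields are my own
assumption, not something specified in the thread):

/* Illustrative debug trace added to use_zero_page() in mm/memory.c. */
static inline int use_zero_page(struct vm_area_struct *vma)
{
	printk(KERN_DEBUG "use_zero_page: vma %p flags %#lx ops %p fault %p\n",
	       vma, vma->vm_flags, vma->vm_ops,
	       vma->vm_ops ? vma->vm_ops->fault : NULL);

	if (vma->vm_flags & (VM_LOCKED | VM_SHARED))
		return 0;
	return !vma->vm_ops ||
		(!vma->vm_ops->fault && !vma->vm_ops->nopfn);
}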

> That said, since the previous patch _did_ work, I bet that one that does
> both VM_LOCKED and VM_SHARED works too. There was a reason I wanted to
> do that VM_SHARED test. I think the VM_SHARED test is sane, unlike the
> VM_LOCKED test (that is a fairly dubious hack for mlock).
>
> So here's the final version. I bet it works.

Yeah, it works great! Thank you.

Jeff.

> mm/memory.c | 23 +++++++++++++++++++++--
> 1 files changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 9aefaae..423e0e7 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1045,6 +1045,26 @@ no_page_table:
>  	return page;
>  }
>
> +/* Can we do the FOLL_ANON optimization? */
> +static inline int use_zero_page(struct vm_area_struct *vma)
> +{
> +	/*
> +	 * We don't want to optimize FOLL_ANON for make_pages_present()
> +	 * when it tries to page in a VM_LOCKED region. As to VM_SHARED,
> +	 * we want to get the page from the page tables to make sure
> +	 * that we serialize and update with any other user of that
> +	 * mapping.
> +	 */
> +	if (vma->vm_flags & (VM_LOCKED | VM_SHARED))
> +		return 0;
> +	/*
> +	 * And if we have a fault or a nopfn routine, it's not an
> +	 * anonymous region.
> +	 */
> +	return !vma->vm_ops ||
> +		(!vma->vm_ops->fault && !vma->vm_ops->nopfn);
> +}
> +
>  int get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
>  		unsigned long start, int len, int write, int force,
>  		struct page **pages, struct vm_area_struct **vmas)
> @@ -1119,8 +1139,7 @@ int get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
>  		foll_flags = FOLL_TOUCH;
>  		if (pages)
>  			foll_flags |= FOLL_GET;
> -		if (!write && !(vma->vm_flags & VM_LOCKED) &&
> -		    (!vma->vm_ops || !vma->vm_ops->fault))
> +		if (!write && use_zero_page(vma))
>  			foll_flags |= FOLL_ANON;
>
>  		do {
>
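
For readers following along, here is an illustrative userspace sketch (not
part of the patch) of the three kinds of VMAs that use_zero_page()
distinguishes. The sizes and flags below are arbitrary assumptions; actually
exercising the FOLL_ANON path would need a get_user_pages() caller such as
direct I/O or /proc/<pid>/mem access.

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4096;

	/* Private anonymous, not mlocked: use_zero_page() returns 1, so a
	 * read-only get_user_pages() may hand back ZERO_PAGE (FOLL_ANON). */
	char *anon = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* MAP_SHARED sets VM_SHARED: use_zero_page() returns 0, so
	 * get_user_pages() goes through the page tables and serializes
	 * with any other user of the mapping. */
	char *shared = mmap(NULL, len, PROT_READ | PROT_WRITE,
			    MAP_SHARED | MAP_ANONYMOUS, -1, 0);

	/* mlock() sets VM_LOCKED: the optimization is skipped so that
	 * make_pages_present() really populates the region. */
	char *locked = mmap(NULL, len, PROT_READ | PROT_WRITE,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (anon == MAP_FAILED || shared == MAP_FAILED || locked == MAP_FAILED)
		return 1;
	mlock(locked, len);

	printf("anon=%p shared=%p locked=%p\n", anon, shared, locked);
	return 0;
}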

