Subject: Re: [RFC] respect the referenced bit of KVM guest pages?
From: Minchan Kim
Date: Wed, 19 Aug 2009
On Wed, Aug 19, 2009 at 10:24 PM, Wu Fengguang<fengguang.wu@intel.com> wrote:
> On Wed, Aug 19, 2009 at 08:25:56PM +0800, Minchan Kim wrote:
>> On Wed, Aug 19, 2009 at 9:10 PM, Wu Fengguang<fengguang.wu@intel.com> wrote:
>> > On Wed, Aug 19, 2009 at 08:05:19PM +0800, KOSAKI Motohiro wrote:
>> >> >> page_referenced_file?
>> >> >> I think we should change page_referenced().
>> >> >
>> >> > Yeah, good catch.
>> >> >
>> >> >>
>> >> >> Instead, How about this?
>> >> >> ==============================================
>> >> >>
>> >> >> Subject: [PATCH] mm: stop circulating of referenced mlocked pages
>> >> >>
>> >> >> Currently, mlock() systemcall doesn't gurantee to mark the page PG_Mlocked
>> >> >
>> >> >                                                    mark PG_mlocked
>> >> >
>> >> >> because some race prevent page grabbing.
>> >> >> In that case, instead vmscan move the page to unevictable lru.
>> >> >>
>> >> >> However, Recently Wu Fengguang pointed out current vmscan logic isn't so
>> >> >> efficient.
>> >> >> mlocked page can move circulatly active and inactive list because
>> >> >> vmscan check the page is referenced _before_ cull mlocked page.
>> >> >>
>> >> >> Plus, vmscan should mark PG_Mlocked when cull mlocked page.
>> >> >
>> >> >                           PG_mlocked
>> >> >
>> >> >> Otherwise vm stastics show strange number.
>> >> >>
>> >> >> This patch does that.
>> >> >
>> >> > Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
>> >>
>> >> Thanks.
>> >>
>> >>
>> >>
>> >> >> Index: b/mm/rmap.c
>> >> >> ===================================================================
>> >> >> --- a/mm/rmap.c       2009-08-18 19:48:14.000000000 +0900
>> >> >> +++ b/mm/rmap.c       2009-08-18 23:47:34.000000000 +0900
>> >> >> @@ -362,7 +362,9 @@ static int page_referenced_one(struct pa
>> >> >>        * unevictable list.
>> >> >>        */
>> >> >>       if (vma->vm_flags & VM_LOCKED) {
>> >> >> -             *mapcount = 1;  /* break early from loop */
>> >> >> +             *mapcount = 1;          /* break early from loop */
>> >> >> +             *vm_flags |= VM_LOCKED; /* for prevent to move active list */
>> >> >
>> >> >> +             try_set_page_mlocked(vma, page);
>> >> >
>> >> > That call is not absolutely necessary?
>> >>
>> >> Why? I haven't catch your point.
>> >
>> > Because we'll eventually hit another try_set_page_mlocked() when
>> > trying to unmap the page. Ie. duplicated with another call you added
>> > in this patch.
>>
>> Yes. we don't have to call it and we can make patch simple.
>> I already sent patch on yesterday.
>>
>> http://marc.info/?l=linux-mm&m=125059325722370&w=2
>>
>> I think It's more simple than KOSAKI's idea.
>> Is any problem in my patch ?
>
> No, IMHO your patch is simple and good, while KOSAKI's is more
> complete :)
>
> - the try_set_page_mlocked() rename is suitable
> - the call to try_set_page_mlocked() is necessary on try_to_unmap()

We don't need the try_set_page_mlocked() call in try_to_unmap().
That's because try_to_unmap_xxx() already calls try_to_mlock_page() if
the page is mapped by any VM_LOCKED vma, so the page eventually moves
to the unevictable list.
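
To illustrate, here is a minimal stand-alone sketch (plain C, not a
kernel excerpt) of that flow; mlock_vma_page() and try_to_mlock_page()
below are simplified stand-ins for the rmap.c helpers named in this
thread, not their real bodies:

/*
 * Stand-alone sketch (not kernel code): when vmscan tries to unmap a
 * page that is mapped by a VM_LOCKED vma, the unmap path itself mlocks
 * the page, so it ends up on the unevictable list without
 * page_referenced() having to set PG_mlocked first.
 */
#include <stdbool.h>
#include <stdio.h>

#define VM_LOCKED 0x2000UL

struct vma  { unsigned long vm_flags; };
struct page { bool mlocked; bool unevictable; };

/* stand-in: mark the page PG_mlocked and move it to the unevictable list */
static void mlock_vma_page(struct page *page)
{
	page->mlocked = true;
	page->unevictable = true;
}

/* stand-in for rmap's try_to_mlock_page(): mlock if the vma is VM_LOCKED */
static int try_to_mlock_page(struct page *page, struct vma *vma)
{
	if (vma->vm_flags & VM_LOCKED) {
		mlock_vma_page(page);
		return 1;
	}
	return 0;
}

/* stand-in for try_to_unmap_file/anon: report SWAP_MLOCK for mlocked pages */
static const char *try_to_unmap(struct page *page, struct vma *vma)
{
	if (try_to_mlock_page(page, vma))
		return "SWAP_MLOCK";	/* vmscan culls to the unevictable list */
	return "SWAP_SUCCESS";
}

int main(void)
{
	struct vma locked_vma = { .vm_flags = VM_LOCKED };
	struct page page = { false, false };

	printf("try_to_unmap -> %s, PG_mlocked=%d, unevictable=%d\n",
	       try_to_unmap(&page, &locked_vma),
	       page.mlocked, page.unevictable);
	return 0;
}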

> - the "if (VM_LOCKED) referenced = 0" in page_referenced() could
>  cover both active/inactive vmscan

The sooner we set PG_mlocked on the page, the more unnecessary vmscan
cost we save moving it from the active list to the inactive list. But I
think this is a rare case, so there would be only a few such pages and
the overhead should not be big.
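
For reference, a minimal stand-alone sketch of the shrink_page_list()
decision being discussed (simplified, not a kernel excerpt): with the
extra VM_LOCKED test from the hunk quoted below, a referenced mlocked
page is no longer re-activated and instead falls through to
try_to_unmap(), as in the sketch above:

#include <stdbool.h>
#include <stdio.h>

#define VM_LOCKED 0x2000UL

static bool should_activate(bool referenced, unsigned long vm_flags)
{
	/* mirrors: referenced && page_mapping_inuse() && !(vm_flags & VM_LOCKED) */
	return referenced && !(vm_flags & VM_LOCKED);
}

int main(void)
{
	printf("referenced mlocked page: %s\n",
	       should_activate(true, VM_LOCKED) ?
	       "re-activated (keeps circulating)" :
	       "falls through to try_to_unmap()");
	return 0;
}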

As far as I know, having vmscan rescue pages that lost the isolation
race was Lee's design.
But as you pointed out, it has a bug: vmscan can't rescue the page
because it never reaches try_to_unmap().

So I think this approach is proper. :)

> I did like your proposed
>
>                if (sc->order <= PAGE_ALLOC_COSTLY_ORDER &&
> -                                       referenced && page_mapping_inuse(page))
> +                                       referenced && page_mapping_inuse(page)
> +                                       && !(vm_flags & VM_LOCKED))
>                        goto activate_locked;
>
> which looks more intuitive and less confusing.
>
> Thanks,
> Fengguang
>



--
Kind regards,
Minchan Kim