Subject: Re: [PATCH] mm: compaction: Abort compaction if too many pages are isolated and caller is asynchronous
On Fri, Jun 03, 2011 at 08:01:44AM +0900, Minchan Kim wrote:
> On Fri, Jun 3, 2011 at 7:32 AM, Andrea Arcangeli <aarcange@redhat.com> wrote:
> > On Fri, Jun 03, 2011 at 07:23:48AM +0900, Minchan Kim wrote:
> >> I mean we have more tail pages than head pages, so I think we are likely to
> >> meet tail pages. Of course, compared to all pages (page cache, anon and
> >> so on), compound pages would be a very small percentage.
> >
> > Yes, that's my point: since they are a small percentage, it's no big
> > deal to break the loop early.
>
> Indeed.
>
> >
> >> > isolated the head and it's useless to insist on more tail pages (at
> >> > least for large page size like on x86). Plus we've compaction so
> >>
> >> I can't understand your point. Could you elaborate on it?
> >
> > What I meant is that if we already isolated the head page of the THP,
> > we don't need to insist on the tail pages; breaking the loop early
> > still gives us a chance to free a whole 2M region, because we
> > isolated the head page (it'll involve some work and swapping, but if
> > it was a compound trans page we're OK to break the loop, and we're
> > not making the logic any worse). Provided the PMD_SIZE is quite
> > large, like 2/4M...
>
> Do you want this? (it's almost pseudo-code)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 7a4469b..9d7609f 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1017,7 +1017,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  	for (scan = 0; scan < nr_to_scan && !list_empty(src); scan++) {
>  		struct page *page;
>  		unsigned long pfn;
> -		unsigned long end_pfn;
> +		unsigned long start_pfn, end_pfn;
>  		unsigned long page_pfn;
>  		int zone_id;
>
> @@ -1057,9 +1057,9 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  		 */
>  		zone_id = page_zone_id(page);
>  		page_pfn = page_to_pfn(page);
> -		pfn = page_pfn & ~((1 << order) - 1);
> +		start_pfn = pfn = page_pfn & ~((1 << order) - 1);
>  		end_pfn = pfn + (1 << order);
> -		for (; pfn < end_pfn; pfn++) {
> +		while (pfn < end_pfn) {
>  			struct page *cursor_page;
>
>  			/* The target page is in the block, ignore it. */
> @@ -1086,17 +1086,25 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  				break;
>
>  			if (__isolate_lru_page(cursor_page, mode, file) == 0) {
> +				int isolated_pages;
>  				list_move(&cursor_page->lru, dst);
>  				mem_cgroup_del_lru(cursor_page);
> -				nr_taken += hpage_nr_pages(page);
> +				isolated_pages = hpage_nr_pages(page);
> +				nr_taken += isolated_pages;
> +				/* if we have isolated enough pages, break out early */
> +				if (nr_taken > end_pfn - start_pfn)
> +					break;
> +				pfn += isolated_pages;

I think this condition is somewhat unlikely. We are scanning within
aligned blocks in this linear scanner, and huge pages are always
aligned, so the only situation where we'll encounter a hugepage in the
middle of this linear scan is when the requested order is larger than a
huge page. That is exceptionally rare (see the sketch below the quoted
diff).

Did I miss something?

>  				nr_lumpy_taken++;
>  				if (PageDirty(cursor_page))
>  					nr_lumpy_dirty++;
>  				scan++;
>  			} else {
>  				/* the page is freed already. */
> -				if (!page_count(cursor_page))
> +				if (!page_count(cursor_page)) {
> +					pfn++;
>  					continue;
> +				}
>  				break;
>  			}
>  		}
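
To make the alignment argument concrete, here is a quick standalone
sketch (illustration only, nothing below is from the patch;
HPAGE_PMD_ORDER is 9 for 2M huge pages with 4K base pages on x86):

#include <stdbool.h>

#define HPAGE_PMD_ORDER 9	/* 2M THP with 4K base pages on x86 */

/*
 * Can a THP head page fall strictly inside the order-aligned window
 * that the lumpy scanner walks?  Head pages sit on
 * HPAGE_PMD_ORDER-aligned pfns and the window itself is order-aligned,
 * so the first aligned pfn after start is at least a whole window away
 * whenever order <= HPAGE_PMD_ORDER.
 */
static bool head_strictly_inside_window(unsigned long pfn, int order)
{
	unsigned long start = pfn & ~((1UL << order) - 1);
	unsigned long end = start + (1UL << order);
	/* first huge-page-aligned pfn strictly after start */
	unsigned long head = (start + (1UL << HPAGE_PMD_ORDER)) &
				~((1UL << HPAGE_PMD_ORDER) - 1);

	return head < end;	/* false for every order <= HPAGE_PMD_ORDER */
}

In other words, for order <= 9 the scanner can only meet a THP head at
start_pfn itself, never in the middle of the window.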
>
> >
> > The only way this patch makes things worse is for a slub order-3 page
> > in the process of being freed. But tail pages aren't generally free
> > anyway, so I doubt this really makes any difference; plus, the tail
> > flag is cleared as soon as the page reaches the buddy, so it's probably
>
> Okay. Considering that PG_tail is cleared as soon as the slub order-3
> page is freed, it would be a very rare case.
>
> > unnoticeable, as this then makes a difference only during a race
> > (plus, a tail page can't be isolated; only head pages can be on the
> > LRUs, and only if they're THP).
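
For reference, the kind of bail-out being discussed would sit in the
cursor loop of isolate_lru_pages() and look something like this (a
sketch only, not the posted patch):

	/*
	 * Sketch: tail pages are never on the LRU, so there is nothing
	 * to isolate here, and the tail flag is cleared once the
	 * compound page reaches the buddy allocator, so racing with a
	 * free at worst breaks out of the loop a little early.
	 */
	if (PageTail(cursor_page))
		break;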
> >
> >> > insisting and screwing up LRU ordering isn't worth it; better to be
> >> > permissive and abort... in fact I wouldn't mind removing the
> >> > entire lumpy logic when COMPACTION_BUILD is true, but that alters
> >> > the trace too...
> >>
> >> AFAIK, that's the final destination, as compaction will not break LRU
> >> ordering once my patch (inorder-putback) is merged.
> >
> > Agreed. I like your patchset; sorry for not having reviewed it in
> > detail yet, but other issues kept popping up in the last few days.
>
> No problem. It's more urgent than mine. :)
>

I'm going to take the opportunity to apologise for not reviewing that
series yet. I've been kept too busy with other bugs to set aside the
few hours I need to review it. I'm hoping to get to it this week if
everything goes well.

--
Mel Gorman
SUSE Labs

