Subject: Re: [patch 5/5] thp: split huge page if head page is isolated
From: Minchan Kim
Date: 2011-11-10
The quoted text is getting long, so let me trim it.

On Thu, Nov 10, 2011 at 10:07:10AM +0800, Shaohua Li wrote:

<snip>

> > > Couldn't we make both sides happy?
> > >
> > > Here is my quick patch.
> > > How about this?
> > > It doesn't split THPs in page_list but still reclaims non-THPs, so
> > > I think it doesn't change the old behavior much.
> > I like this idea; I will do some tests soon.
> hmm, this doesn't work as expected. The putback_lru_page() messes up the
> LRU order. This isn't a problem if the page will be written, since
> rotate_reclaimable_page() will fix the order, but I got worse data than
> with my v2 patch: more thp_fallbacks, a messed-up LRU order, and more
> pages scanned. We could add something like putback_lru_page_tail, but
> I'm not convinced it's worth it (even with it, we would still mess up
> the LRU order a little). So I'm going back to the v2 patch if there is
> no better solution; it's still much better than the current code.
>
> Thanks,
> Shaohua

Hmm, I don't think it's an LRU ordering problem, but that's just a guess,
and you may well be right since you have a workload to test with.

My guess is that the cull_mlocked path resets synchronous page reclaim,
so this version puts passed-over THPs back through a separate pass_thp
path instead. Could you test this patch again?

And if the problem really is caused by LRU ordering, I think it would be
worth adding a putback_lru_page_tail, since THP already added
lru_add_page_tail for the same reason.
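
Just to sketch the idea (untested; the function name, signature, and
locking below are my assumptions, and memcg accounting plus the
unevictable cases that putback_lru_page() handles are omitted):

/*
 * Hypothetical sketch, not an existing API: put an isolated page back
 * on the LRU immediately behind an anchor page that is already on the
 * LRU, preserving the order that split_huge_page() established via
 * lru_add_page_tail().
 */
static void putback_lru_page_tail(struct zone *zone, struct page *anchor,
                                  struct page *page)
{
        VM_BUG_ON(PageLRU(page));
        VM_BUG_ON(!PageLRU(anchor));

        spin_lock_irq(&zone->lru_lock);
        SetPageLRU(page);
        /* list_add() links @page right after @anchor on the LRU list */
        list_add(&page->lru, &anchor->lru);
        spin_unlock_irq(&zone->lru_lock);
        put_page(page);         /* drop the isolation reference */
}

The caller would have to hold a reference on the anchor page, and a real
version would also need to update the NR_LRU page state counters.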

Thanks!

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b55699c..e2c84c2 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -767,6 +767,8 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 	unsigned long nr_dirty = 0;
 	unsigned long nr_congested = 0;
 	unsigned long nr_reclaimed = 0;
+	bool split_thp = false;
+	bool swapout_thp = false;
 
 	cond_resched();
 
@@ -784,6 +786,14 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (!trylock_page(page))
 			goto keep;
 
+		/*
+		 * If we have already swapped out a THP, don't split
+		 * any more THPs; wait until the dirty THP pages have
+		 * been written to the swap device.
+		 */
+		if (unlikely(swapout_thp && PageTransHuge(page)))
+			goto pass_thp;
+
 		VM_BUG_ON(PageActive(page));
 		VM_BUG_ON(page_zone(page) != zone);
 
@@ -838,6 +848,12 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (PageAnon(page) && !PageSwapCache(page)) {
 			if (!(sc->gfp_mask & __GFP_IO))
 				goto keep_locked;
+			if (unlikely(PageTransHuge(page)))
+				if (unlikely(split_huge_page_list(page,
+								page_list)))
+					goto activate_locked;
+				else
+					split_thp = true;
 			if (!add_to_swap(page))
 				goto activate_locked;
 			may_enter_fs = 1;
@@ -880,6 +896,8 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			case PAGE_ACTIVATE:
 				goto activate_locked;
 			case PAGE_SUCCESS:
+				if (split_thp)
+					swapout_thp = true;
 				if (PageWriteback(page))
 					goto keep_lumpy;
 				if (PageDirty(page))
@@ -962,6 +980,10 @@ free_it:
 		list_add(&page->lru, &free_pages);
 		continue;
 
+pass_thp:
+		unlock_page(page);
+		putback_lru_page(page);
+		continue;
 cull_mlocked:
 		if (PageSwapCache(page))
 			try_to_free_swap(page);

--
Kind regards,
Minchan Kim

