    From: Minchan Kim <minchan.kim@gmail.com>
    Subject: Re: [PATCH v5 2/4] memcg: move memcg reclaimable page into tail of inactive list

    Hi Kame,

    On Fri, Feb 18, 2011 at 1:04 AM, KAMEZAWA Hiroyuki
    <kamezawa.hiroyu@jp.fujitsu.com> wrote:
    > On Fri, 18 Feb 2011 00:08:20 +0900
    > Minchan Kim <minchan.kim@gmail.com> wrote:
    >
    >> The rotate_reclaimable_page function moves just written out
    >> pages, which the VM wanted to reclaim, to the end of the
    >> inactive list.  That way the VM will find those pages first
    >> next time it needs to free memory.
    >> This patch applies the same rule to memcg.
    >> It can help prevent unnecessary eviction of memcg's working set pages.
    >>
    >> Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
    >> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
    >> Reviewed-by: Rik van Riel <riel@redhat.com>
    >> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
    >> Cc: Johannes Weiner <hannes@cmpxchg.org>
    >> Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
    >> ---
    >> Changelog since v4:
    >>  - add acked-by and reviewed-by
    >>  - change description - suggested by Rik
    >>
    >>  include/linux/memcontrol.h |    6 ++++++
    >>  mm/memcontrol.c            |   27 +++++++++++++++++++++++++++
    >>  mm/swap.c                  |    3 ++-
    >>  3 files changed, 35 insertions(+), 1 deletions(-)
    >>
    >> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
    >> index 3da48ae..5a5ce70 100644
    >> --- a/include/linux/memcontrol.h
    >> +++ b/include/linux/memcontrol.h
    >> @@ -62,6 +62,7 @@ extern int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
    >>                                       gfp_t gfp_mask);
    >>  extern void mem_cgroup_add_lru_list(struct page *page, enum lru_list lru);
    >>  extern void mem_cgroup_del_lru_list(struct page *page, enum lru_list lru);
    >> +extern void mem_cgroup_rotate_reclaimable_page(struct page *page);
    >>  extern void mem_cgroup_rotate_lru_list(struct page *page, enum lru_list lru);
    >>  extern void mem_cgroup_del_lru(struct page *page);
    >>  extern void mem_cgroup_move_lists(struct page *page,
    >> @@ -215,6 +216,11 @@ static inline void mem_cgroup_del_lru_list(struct page *page, int lru)
    >>       return ;
    >>  }
    >>
    >> +static inline void mem_cgroup_rotate_reclaimable_page(struct page *page)
    >> +{
    >> +     return ;
    >> +}
    >> +
    >>  static inline void mem_cgroup_rotate_lru_list(struct page *page, int lru)
    >>  {
    >>       return ;
    >> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
    >> index 686f1ce..ab8bdff 100644
    >> --- a/mm/memcontrol.c
    >> +++ b/mm/memcontrol.c
    >> @@ -813,6 +813,33 @@ void mem_cgroup_del_lru(struct page *page)
    >>       mem_cgroup_del_lru_list(page, page_lru(page));
    >>  }
    >>
    >> +/*
    >> + * Writeback is about to end against a page which has been marked for immediate
    >> + * reclaim.  If it still appears to be reclaimable, move it to the tail of the
    >> + * inactive list.
    >> + */
    >> +void mem_cgroup_rotate_reclaimable_page(struct page *page)
    >> +{
    >> +     struct mem_cgroup_per_zone *mz;
    >> +     struct page_cgroup *pc;
    >> +     enum lru_list lru = page_lru_base_type(page);
    >> +
    >> +     if (mem_cgroup_disabled())
    >> +             return;
    >> +
    >> +     pc = lookup_page_cgroup(page);
    >> +     /*
    >> +      * Used bit is set without atomic ops but after smp_wmb().
    >> +      * To make pc->mem_cgroup visible, insert smp_rmb() here.
    >> +      */
    >> +     smp_rmb();
    >> +     /* unused or root page is not rotated. */
    >> +     if (!PageCgroupUsed(pc) || mem_cgroup_is_root(pc->mem_cgroup))
    >> +             return;
    >> +     mz = page_cgroup_zoneinfo(pc->mem_cgroup, page);
    >> +     list_move_tail(&pc->lru, &mz->lists[lru]);
    >> +}
    >> +
    >
    > Hmm, I'm sorry, I misunderstood this. IIUC, page_lru_base_type() always returns
    > LRU_INACTIVE_XXX, so this function may move a page from the active LRU to the
    > inactive LRU.
    >
    > Then the LRU counters for memcg should be updated.
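    (For context, the two helpers Kame refers to live in include/linux/mm_inline.h.
    Roughly, as reproduced from memory for that kernel era (a sketch, not the exact
    source):

        /* Base LRU of a page: always one of the *inactive* lists. */
        static inline enum lru_list page_lru_base_type(struct page *page)
        {
                if (page_is_file_cache(page))
                        return LRU_INACTIVE_FILE;
                return LRU_INACTIVE_ANON;
        }

        /* Actual LRU of a page: adds LRU_ACTIVE for active pages. */
        static inline enum lru_list page_lru(struct page *page)
        {
                enum lru_list lru;

                if (PageUnevictable(page))
                        lru = LRU_UNEVICTABLE;
                else {
                        lru = page_lru_base_type(page);
                        if (PageActive(page))
                                lru += LRU_ACTIVE;
                }
                return lru;
        }

    So page_lru_base_type() indeed always names an inactive list; only page_lru()
    takes the active bit into account.)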

    The goal of mem_cgroup_rotate_reclaimable_page is the same as that of
    rotate_reclaimable_page: the page is already on the inactive list.
    Look at the !PageActive(page) check.
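    (That check is the one in pagevec_move_tail(), visible in the mm/swap.c hunk
    quoted below. In addition, pages are only queued for pagevec_move_tail()
    through rotate_reclaimable_page(), which filters out active pages itself.
    Roughly, from memory of that era's mm/swap.c, so treat it as a sketch rather
    than the exact source:

        void rotate_reclaimable_page(struct page *page)
        {
                /* Queue only clean, inactive, evictable pages that are on an
                 * LRU; everything else is left alone. */
                if (!PageLocked(page) && !PageDirty(page) && !PageActive(page) &&
                    !PageUnevictable(page) && PageLRU(page)) {
                        struct pagevec *pvec;
                        unsigned long flags;

                        page_cache_get(page);
                        local_irq_save(flags);
                        pvec = &__get_cpu_var(lru_rotate_pvecs);
                        if (!pagevec_add(pvec, page))
                                pagevec_move_tail(pvec);
                        local_irq_restore(flags);
                }
        }

    So any page that actually gets moved has been checked as inactive both when
    it was queued and again under the zone lock in pagevec_move_tail().)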

    But if you want to make the function general (i.e. support active->inactive
    moves, too), I don't mind. In that case, let's make rotate_reclaimable_page
    a general function as well, although no caller needs that at the moment.

    Thanks for the careful review.

    >
    > Could you replace the code after the lookup with something like this?
    >
    >     VM_BUG_ON(!PageCgroupAcctLRU(pc)); /* Implies this page must be on some LRU */
    >     if (!PageCgroupUsed(pc))
    >           return;
    >     /* The Used bit check may not even be necessary, because an unused page
    >        can be lazily left on the LRU. We trust the AcctLRU bit. */
    >     mz = page_cgroup_zoneinfo(pc->mem_cgroup, page);
    >     MEM_CGROUP_ZSTAT(mz, page_lru(page)) -= 1 << compound_order(page);
    >     MEM_CGROUP_ZSTAT(mz, lru) += 1 << compound_order(page);
    >     if (mem_cgroup_is_root(pc->mem_cgroup))
    >           return;
    >     list_move_tail(&pc->lru, &mz->lists[lru]);
    >
    >
    > Thanks,
    > -Kame
    >>  void mem_cgroup_rotate_lru_list(struct page *page, enum lru_list lru)
    >>  {
    >>       struct mem_cgroup_per_zone *mz;
    >> diff --git a/mm/swap.c b/mm/swap.c
    >> index 4aea806..1b9e4eb 100644
    >> --- a/mm/swap.c
    >> +++ b/mm/swap.c
    >> @@ -200,8 +200,9 @@ static void pagevec_move_tail(struct pagevec *pvec)
    >>                       spin_lock(&zone->lru_lock);
    >>               }
    >>               if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
    >> -                     int lru = page_lru_base_type(page);
    >> +                     enum lru_list lru = page_lru_base_type(page);
    >>                       list_move_tail(&page->lru, &zone->lru[lru].list);
    >> +                     mem_cgroup_rotate_reclaimable_page(page);
    >>                       pgmoved++;
    >>               }
    >>       }
    >> --
    >> 1.7.1
    >>



    --
    Kind regards,
    Minchan Kim
    --
    To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
    the body of a message to majordomo@vger.kernel.org
    More majordomo info at http://vger.kernel.org/majordomo-info.html
    Please read the FAQ at http://www.tux.org/lkml/
