From:    KOSAKI Motohiro
Subject: Re: [PATCH for mmotm 2/5]
Date:    Thu, 11 Jun 2009
> On Thu, Jun 11, 2009 at 07:26:48PM +0900, KOSAKI Motohiro wrote:
> > Changes since Wu's original patch
> > - adding vmstat
> > - rename NR_TMPFS_MAPPED to NR_SWAP_BACKED_FILE_MAPPED
> >
> >
> > ----------------------
> > Subject: [PATCH] introduce NR_SWAP_BACKED_FILE_MAPPED zone stat
>
> This got lost in the actual subject line.
>
> > A desirable zone reclaim implementation wants to know the number of
> > file-backed and unmapped pages.
> >
>
> There needs to be more justification for this. We need an example
> failure case that this addresses. For example, Patch 1 of my series was
> to address the following problem included with the patchset leader
>
> "The reported problem was that malloc() stalled for a long time (minutes
> in some cases) if a large tmpfs mount was occupying a large percentage of
> memory overall. The pages did not get cleaned or reclaimed by zone_reclaim()
> because the zone_reclaim_mode was unsuitable, but the lists are uselessly
> scanned frequently, making the CPU spin at near 100%."
>
> We should have a similar case.
>
> What "desirable" zone_reclaim() should be spelled out as well. Minimally
> something like
>
> "For zone_reclaim() to be efficient, it must be able to detect in advance
> if the LRU scan will reclaim the necessary pages with the limitations of
> the current zone_reclaim_mode. Otherwise, CPU usage increases as
> zone_reclaim() uselessly scans the LRU list.
>
> The problem with the heuristic is ....
>
> This patch fixes the heuristic by ...."
>
> etc?
>
> I'm not trying to be awkward. I believe I provided similar reasoning
> with my own patchset.

You are right. My intention is not to fix an actual issue, it is only to
fix a lie in the documentation.

Documentation/sysctl/vm.txt says:
=============================================================

min_unmapped_ratio:

This is available only on NUMA kernels.

A percentage of the total pages in each zone. Zone reclaim will only
occur if more than this percentage of pages are file backed and unmapped.
This is to insure that a minimal amount of local pages is still available for
file I/O even if the node is overallocated.

The default is 1 percent.
==============================================================

But the actual code doesn't account for the "percentage of pages that are
file backed and unmapped". An administrator can't infer the current
implementation from this documentation.
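For reference, a rough sketch of what the kernel actually tests today
(written from memory, so it may not match mmotm line for line):

	/*
	 * The sysctl is converted into a per-zone page count:
	 *   zone->min_unmapped_pages =
	 *	(zone->present_pages * sysctl_min_unmapped_ratio) / 100;
	 *
	 * and zone_reclaim() then checks roughly:
	 */
	if (zone_page_state(zone, NR_FILE_PAGES) -
	    zone_page_state(zone, NR_FILE_MAPPED) > zone->min_unmapped_pages)
		/* proceed with zone reclaim */;

Both NR_FILE_PAGES and NR_FILE_MAPPED count tmpfs pages, so this is not
the documented "file backed and unmapped" number.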

Plus, I don't think this patch is too messy, so I decided to make
this fix.

If anyone provides a good documentation fix instead, my worry will vanish.
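With the new counter, the documented number could be derived along these
lines (illustration only; the helper name is made up and is not part of
this patch):

	static unsigned long zone_unmapped_file_pages(struct zone *zone)
	{
		unsigned long file_lru =
			zone_page_state(zone, NR_INACTIVE_FILE) +
			zone_page_state(zone, NR_ACTIVE_FILE);
		unsigned long file_mapped =
			zone_page_state(zone, NR_FILE_MAPPED) -
			zone_page_state(zone, NR_SWAP_BACKED_FILE_MAPPED);

		/* file LRU pages that are not mapped into any pagetable */
		return (file_lru > file_mapped) ? file_lru - file_mapped : 0;
	}

zone_reclaim() could then compare that against zone->min_unmapped_pages,
matching what vm.txt describes.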



> > Thus, we need to know the number of swap-backed mapped pages to
> > calculate the above number.
> >
> >
> > Cc: Mel Gorman <mel@csn.ul.ie>
> > Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
> > Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
> > ---
> >  include/linux/mmzone.h |    2 ++
> >  mm/rmap.c              |    7 +++++++
> >  mm/vmstat.c            |    1 +
> > 3 files changed, 10 insertions(+)
> >
> > Index: b/include/linux/mmzone.h
> > ===================================================================
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -88,6 +88,8 @@ enum zone_stat_item {
> >  	NR_ANON_PAGES,	/* Mapped anonymous pages */
> >  	NR_FILE_MAPPED,	/* pagecache pages mapped into pagetables.
> >  			   only modified from process context */
> > +	NR_SWAP_BACKED_FILE_MAPPED, /* Similar to NR_FILE_MAPPED. but
> > +				       only account swap-backed pages */
> >  	NR_FILE_PAGES,
> >  	NR_FILE_DIRTY,
> >  	NR_WRITEBACK,
> > Index: b/mm/rmap.c
> > ===================================================================
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -829,6 +829,10 @@ void page_add_file_rmap(struct page *pag
> >  {
> >  	if (atomic_inc_and_test(&page->_mapcount)) {
> >  		__inc_zone_page_state(page, NR_FILE_MAPPED);
> > +		if (PageSwapBacked(page))
> > +			__inc_zone_page_state(page,
> > +					      NR_SWAP_BACKED_FILE_MAPPED);
> > +
> >  		mem_cgroup_update_mapped_file_stat(page, 1);
> >  	}
> >  }
> > @@ -884,6 +888,9 @@ void page_remove_rmap(struct page *page)
> >  		__dec_zone_page_state(page, NR_ANON_PAGES);
> >  	} else {
> >  		__dec_zone_page_state(page, NR_FILE_MAPPED);
> > +		if (PageSwapBacked(page))
> > +			__dec_zone_page_state(page,
> > +					      NR_SWAP_BACKED_FILE_MAPPED);
> >  	}
> >  	mem_cgroup_update_mapped_file_stat(page, -1);
> >  	/*
> > Index: b/mm/vmstat.c
> > ===================================================================
> > --- a/mm/vmstat.c
> > +++ b/mm/vmstat.c
> > @@ -633,6 +633,7 @@ static const char * const vmstat_text[]
> >  	"nr_mlock",
> >  	"nr_anon_pages",
> >  	"nr_mapped",
> > +	"nr_swap_backed_file_mapped",
> >  	"nr_file_pages",
> >  	"nr_dirty",
> >  	"nr_writeback",
> >
>
> Otherwise the patch seems reasonable.
>
> --
> Mel Gorman
> Part-time Phd Student                          Linux Technology Center
> University of Limerick                         IBM Dublin Software Lab




