    Date:    2012-06-21
    From:    Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
    Subject: Re: [PATCH 2/2] memcg: add per cgroup dirty pages accounting
    (2012/06/19 23:31), Sha Zhengju wrote:
    > On Sat, Jun 16, 2012 at 2:34 PM, Kamezawa Hiroyuki
    > <kamezawa.hiroyu@jp.fujitsu.com> wrote:
    >> (2012/06/16 0:32), Greg Thelen wrote:
    >>>
    >>> On Fri, Jun 15 2012, Sha Zhengju wrote:
    >>>
    >>>> This patch adds memcg routines to count dirty pages. I noticed that
    >>>> the list has discussed per-cgroup dirty page limiting
    >>>> (http://lwn.net/Articles/455341/) before, but it did not get merged.
    >>>
    >>>
    >>> Good timing; I was just about to make another effort to get some of
    >>> these patches upstream. Like you, I was going to start with some basic
    >>> counters.
    >>>
    >>> Your approach is similar to what I have in mind. While it is good to
    >>> use the existing PageDirty flag rather than introducing a new
    >>> page_cgroup flag, there are locking complications (see below) in
    >>> handling races between pages moving between memcgs and pages being
    >>> {un}marked dirty.
    >>>
    >>>> I have no idea how this is going now, but maybe we can add per-cgroup
    >>>> dirty page accounting first. This allows the memory controller to
    >>>> maintain an accurate view of the amount of its memory that is dirty,
    >>>> and it can provide some information while a group's direct reclaim is
    >>>> working.
    >>>>
    >>>> After commit 89c06bd5 (memcg: use new logic for page stat accounting),
    >>>> we no longer need a per-page_cgroup flag and can directly use the
    >>>> struct page flag.
    >>>>
    >>>>
    >>>> Signed-off-by: Sha Zhengju <handai.szj@taobao.com>
    >>>> ---
    >>>>  include/linux/memcontrol.h |    1 +
    >>>>  mm/filemap.c               |    1 +
    >>>>  mm/memcontrol.c            |   32 +++++++++++++++++++++++++-------
    >>>>  mm/page-writeback.c        |    2 ++
    >>>>  mm/truncate.c              |    1 +
    >>>>  5 files changed, 30 insertions(+), 7 deletions(-)
    >>>>
    >>>> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
    >>>> index a337c2e..8154ade 100644
    >>>> --- a/include/linux/memcontrol.h
    >>>> +++ b/include/linux/memcontrol.h
    >>>> @@ -39,6 +39,7 @@ enum mem_cgroup_stat_index {
    >>>>  	MEM_CGROUP_STAT_FILE_MAPPED, /* # of pages charged as file rss */
    >>>>  	MEM_CGROUP_STAT_SWAPOUT, /* # of pages, swapped out */
    >>>>  	MEM_CGROUP_STAT_DATA, /* end of data requires synchronization */
    >>>> +	MEM_CGROUP_STAT_FILE_DIRTY, /* # of dirty pages in page cache */
    >>>>  	MEM_CGROUP_STAT_NSTATS,
    >>>>  };
    >>>>
    >>>> diff --git a/mm/filemap.c b/mm/filemap.c
    >>>> index 79c4b2b..5b5c121 100644
    >>>> --- a/mm/filemap.c
    >>>> +++ b/mm/filemap.c
    >>>> @@ -141,6 +141,7 @@ void __delete_from_page_cache(struct page *page)
    >>>>  	 * having removed the page entirely.
    >>>>  	 */
    >>>>  	if (PageDirty(page) && mapping_cap_account_dirty(mapping)) {
    >>>> +		mem_cgroup_dec_page_stat(page, MEM_CGROUP_STAT_FILE_DIRTY);
    >>>
    >>>
    >>> You need to use mem_cgroup_{begin,end}_update_page_stat around critical
    >>> sections that:
    >>> 1) check PageDirty
    >>> 2) update the MEM_CGROUP_STAT_FILE_DIRTY counter
    >>>
    >>> This protects against the page being moved between memcgs while it is
    >>> being accounted. The same comment applies to all of your new calls to
    >>> mem_cgroup_{dec,inc}_page_stat. For the usage pattern, see
    >>> page_add_file_rmap.
    >>>
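
    For reference, the usage pattern in page_add_file_rmap() after commit
    89c06bd5 looks roughly like the sketch below (trimmed, and using this
    patch's MEM_CGROUP_STAT_* naming for the stat index):

        void page_add_file_rmap(struct page *page)
        {
                bool locked;
                unsigned long flags;

                /* block memcg page-move while this page is being accounted */
                mem_cgroup_begin_update_page_stat(page, &locked, &flags);
                if (atomic_inc_and_test(&page->_mapcount)) {
                        __inc_zone_page_state(page, NR_FILE_MAPPED);
                        mem_cgroup_inc_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
                }
                mem_cgroup_end_update_page_stat(page, &locked, &flags);
        }
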
    >>
    >> If you run into any difficulty with mem_cgroup_{begin,end}_update_page_stat(),
    >> please let me know... I hope they will work well enough....
    >>
    >
    > Hi, Kame
    >
    > While digging into the big lock of mem_cgroup_{begin,end}_update_page_stat(),
    > I found the reality is more complex than I thought. Simply stated, modifying
    > page information and updating the page stat may be far apart and at
    > different levels (e.g. mm vs. fs), so if we use the big lock it may lead
    > to scalability and maintainability issues.
    >
    > For example:
    >     mem_cgroup_begin_update_page_stat()
    >     modify page information       => TestSetPageDirty in ceph_set_page_dirty() (fs/ceph/addr.c)
    >     XXXXXX                        => other fs operations
    >     mem_cgroup_update_page_stat() => account_page_dirtied() in mm/page-writeback.c
    >     mem_cgroup_end_update_page_stat()
    >
    > We could choose to take the lock at a higher level, meaning in the VFS
    > set_page_dirty(), but that may span too much code and may also miss some
    > cases.
    > What's your opinion on this problem?
    >

    Yes, that's sad... If set_page_dirty() were always called under lock_page(),
    the story would be easier (we would take lock_page() on the move side),
    but the comment on set_page_dirty() says that is not true... For now, I
    haven't found a magical way to avoid the race.
    (*) If holding lock_page() in move_account() can be a generic solution, that
    will be good.

    A proposal from me is a small start: you can begin by adding hooks to generic
    functions such as set_page_dirty(), __set_page_dirty_nobuffers(), and
    clear_page_dirty_for_io().
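
    A sketch of what such a hook could look like, taking clear_page_dirty_for_io()
    as the example (most of the real function is omitted; the point is only that
    the page-flag transition and the memcg counter update sit inside one
    begin/end section):

        int clear_page_dirty_for_io(struct page *page)
        {
                bool locked;
                unsigned long flags;
                int ret = 0;

                /* keep the flag test and the counter update in one section */
                mem_cgroup_begin_update_page_stat(page, &locked, &flags);
                if (TestClearPageDirty(page)) {
                        mem_cgroup_dec_page_stat(page, MEM_CGROUP_STAT_FILE_DIRTY);
                        dec_zone_page_state(page, NR_FILE_DIRTY);
                        ret = 1;
                }
                mem_cgroup_end_update_page_stat(page, &locked, &flags);
                return ret;
        }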

    Then see what happens. I guess we can add a WARN_ONCE() against callers of
    update_page_stat() that don't take mem_cgroup_begin/end_update_page_stat()
    (by some new check, for example checking !rcu_read_lock_held() in
    update_page_stat()).
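
    Something like this at the top of mem_cgroup_update_page_stat() (a sketch; it
    relies on mem_cgroup_begin_update_page_stat() taking rcu_read_lock(), so a
    caller that skipped the bracket is not in an RCU read-side section):

        void mem_cgroup_update_page_stat(struct page *page,
                                         enum mem_cgroup_stat_index idx, int val)
        {
                /* catch callers that forgot the begin/end bracket */
                WARN_ONCE(!rcu_read_lock_held(),
                          "%s called without mem_cgroup_begin_update_page_stat()\n",
                          __func__);
                /* ... existing stat update ... */
        }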

    I think we can make a TODO list and catch up on the remaining things one by one.

    Thanks,
    -Kame