    From: Michal Hocko
    Date: Fri, 20 Jul 2012
    Subject: Re: [PATCH] Cgroup: Fix memory accounting scalability in shrink_page_list
    On Thu 19-07-12 16:34:26, Tim Chen wrote:
    [...]
    > diff --git a/mm/vmscan.c b/mm/vmscan.c
    > index 33dc256..aac5672 100644
    > --- a/mm/vmscan.c
    > +++ b/mm/vmscan.c
    > @@ -779,6 +779,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
    >
    >  	cond_resched();
    >
    > +	mem_cgroup_uncharge_start();
    >  	while (!list_empty(page_list)) {
    >  		enum page_references references;
    >  		struct address_space *mapping;

    Is this safe? We have a scheduling point a few lines below. What
    prevents the task from being moved to another cgroup while we are in
    the middle of the batch?
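
    [For reference, a rough sketch of the batching mechanism under
    discussion; the struct layout and field names below (memcg_batch,
    do_batch, nr_pages) are approximations rather than the exact
    mm/memcontrol.c code. The point is that the batch hangs off the
    task, so sleeping as such is harmless, but the uncharges are only
    applied to the memcg recorded in the batch once
    mem_cgroup_uncharge_end() runs:]

	/* Simplified sketch, not the exact upstream implementation. */
	struct memcg_batch_info {
		int do_batch;			/* nesting depth of start/end pairs */
		struct mem_cgroup *memcg;	/* memcg the batched pages were charged to */
		unsigned long nr_pages;		/* pages uncharged so far in this batch */
	};

	void mem_cgroup_uncharge_start(void)
	{
		/* The batch lives on the task, so it survives cond_resched(). */
		if (++current->memcg_batch.do_batch == 1) {
			current->memcg_batch.memcg = NULL;
			current->memcg_batch.nr_pages = 0;
		}
	}

	void mem_cgroup_uncharge_end(void)
	{
		struct memcg_batch_info *batch = &current->memcg_batch;

		if (!batch->do_batch || --batch->do_batch)
			return;		/* not batching, or still nested */

		/* Flush the accumulated uncharges against the batched memcg. */
		if (batch->memcg && batch->nr_pages)
			res_counter_uncharge(&batch->memcg->res,
					     batch->nr_pages * PAGE_SIZE);
		batch->memcg = NULL;
	}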

    > @@ -1026,6 +1027,7 @@ keep_lumpy:
    >
    >  	list_splice(&ret_pages, page_list);
    >  	count_vm_events(PGACTIVATE, pgactivate);
    > +	mem_cgroup_uncharge_end();
    >  	*ret_nr_dirty += nr_dirty;
    >  	*ret_nr_writeback += nr_writeback;
    >  	return nr_reclaimed;

    --
    Michal Hocko
    SUSE Labs
    SUSE LINUX s.r.o.
    Lihovarska 1060/12
    190 00 Praha 9
    Czech Republic

