Subject: Re: [mm] [PATCH 3/4] Memory cgroup hierarchical reclaim

On Sun, 02 Nov 2008 00:18:49 +0530
Balbir Singh <balbir@linux.vnet.ibm.com> wrote:

>
> This patch introduces hierarchical reclaim. When a charge takes an ancestor
> over its limit, the charging routine identifies the ancestor that is above
> its limit. Reclaim then starts from that ancestor's last scanned child and
> continues until the ancestor drops back below its limit.
>
> Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
> ---
>
> mm/memcontrol.c | 153 +++++++++++++++++++++++++++++++++++++++++++++++---------
> 1 file changed, 129 insertions(+), 24 deletions(-)
>
> diff -puN mm/memcontrol.c~memcg-hierarchical-reclaim mm/memcontrol.c
> --- linux-2.6.28-rc2/mm/memcontrol.c~memcg-hierarchical-reclaim 2008-11-02 00:14:59.000000000 +0530
> +++ linux-2.6.28-rc2-balbir/mm/memcontrol.c 2008-11-02 00:14:59.000000000 +0530
> @@ -132,6 +132,11 @@ struct mem_cgroup {
> * statistics.
> */
> struct mem_cgroup_stat stat;
> + /*
> + * While reclaiming in a hierarchy, we cache the last child we
> + * reclaimed from.
> + */
> + struct mem_cgroup *last_scanned_child;
> };
> static struct mem_cgroup init_mem_cgroup;
>
> @@ -467,6 +472,125 @@ unsigned long mem_cgroup_isolate_pages(u
> return nr_taken;
> }
>
> +static struct mem_cgroup *
> +mem_cgroup_from_res_counter(struct res_counter *counter)
> +{
> + return container_of(counter, struct mem_cgroup, res);
> +}
> +
> +/*
> + * Dance down the hierarchy if needed to reclaim memory. We remember the
> + * last child we reclaimed from, so that we don't end up penalizing
> + * one child extensively based on its position in the children list
> + */
> +static int
> +mem_cgroup_hierarchical_reclaim(struct mem_cgroup *mem, gfp_t gfp_mask)
> +{
> + struct cgroup *cg, *cg_current, *cgroup;
> + struct mem_cgroup *mem_child;
> + int ret = 0;
> +
> + if (try_to_free_mem_cgroup_pages(mem, gfp_mask))
> + return -ENOMEM;
> +
> + /*
> + * try_to_free_mem_cgroup_pages() might not give us a full
> + * picture of reclaim. Some pages are reclaimed and might be
> + * moved to swap cache or just unmapped from the cgroup.
> + * Check the limit again to see if the reclaim reduced the
> + * current usage of the cgroup before giving up
> + */
> + if (res_counter_check_under_limit(&mem->res))
> + return 0;
> +
> + /*
> + * Scan all children under the mem_cgroup mem
> + */
> + if (!mem->last_scanned_child)
> + cgroup = list_first_entry(&mem->css.cgroup->children,
> + struct cgroup, sibling);
> + else
> + cgroup = mem->last_scanned_child->css.cgroup;
> +
> + cg_current = cgroup;
> +
> + /*
> + * We iterate twice. One reason is a fundamental list property:
> + * elements are inserted with list_add() and hence the list
> + * behaves like a stack, so list_for_each_entry_safe_from() stops
> + * after seeing the first child. The two loops let us work
> + * independently of the insertion order and give us a full pass
> + * over all list entries for reclaim.
> + */
> + list_for_each_entry_safe_from(cgroup, cg, &cg_current->parent->children,
> + sibling) {
> + mem_child = mem_cgroup_from_cont(cgroup);
> +
> + /*
> + * Move beyond last scanned child
> + */
> + if (mem_child == mem->last_scanned_child)
> + continue;
> +
> + ret = try_to_free_mem_cgroup_pages(mem_child, gfp_mask);
> + mem->last_scanned_child = mem_child;
> +
> + if (res_counter_check_under_limit(&mem->res)) {
> + ret = 0;
> + goto done;
> + }
> + }

Is this safe against cgroup create/remove? Is cgroup_mutex held?

Thanks,
-Kame
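
The last_scanned_child policy above is easier to see in isolation. Here is a
minimal userspace sketch of the same "resume just after the last child we
scanned" round robin, with the cgroup sibling list reduced to a plain array.
struct child, reclaim_round_robin() and the numbers are invented for the demo
and are not part of the patch:

    #include <stdio.h>

    /* Stand-in for one child memcg; "reclaimable" is how many pages a
     * pretend reclaim pass could free from it. */
    struct child { const char *name; int reclaimable; };

    /* Plays the role of mem->last_scanned_child (-1 = none yet). */
    static int last_scanned = -1;

    /* Try each child at most once, starting just after the last child we
     * scanned, so no child is penalized for its position in the list. */
    static int reclaim_round_robin(struct child *kids, int n, int needed)
    {
            int start = last_scanned + 1;
            int i;

            for (i = 0; i < n; i++) {
                    int idx = (start + i) % n;
                    int got = kids[idx].reclaimable;   /* pretend reclaim */

                    kids[idx].reclaimable = 0;
                    last_scanned = idx;
                    printf("scanned %s, reclaimed %d\n", kids[idx].name, got);

                    needed -= got;
                    if (needed <= 0)
                            return 0;          /* back under the limit */
            }
            return -1;                         /* analogue of -ENOMEM */
    }

    int main(void)
    {
            struct child kids[] = { {"A", 3}, {"B", 5}, {"C", 2} };

            reclaim_round_robin(kids, 3, 4);   /* scans A, then B */
            kids[0].reclaimable = kids[1].reclaimable = 4;
            reclaim_round_robin(kids, 3, 4);   /* resumes at C, then A */
            return 0;
    }

Carrying last_scanned across calls is exactly the point the patch comment
makes: reclaim pressure rotates through the children instead of always
starting at the head of the list.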


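Similarly, mem_cgroup_from_res_counter() above is an instance of the kernel's
container_of() pattern: recover the enclosing structure from a pointer to one
of its embedded members. A standalone sketch, with the *_demo types and main()
invented for illustration:

    #include <stddef.h>
    #include <stdio.h>

    /* Userspace mirror of the kernel macro: subtract the member's offset
     * from the member pointer to get back to the enclosing structure. */
    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct res_counter_demo { long usage; };

    struct mem_cgroup_demo {
            int id;
            struct res_counter_demo res;   /* embedded, as in struct mem_cgroup */
    };

    int main(void)
    {
            struct mem_cgroup_demo memcg = { .id = 42 };
            struct res_counter_demo *counter = &memcg.res;

            /* The same recovery step mem_cgroup_from_res_counter() performs. */
            struct mem_cgroup_demo *owner =
                    container_of(counter, struct mem_cgroup_demo, res);

            printf("id = %d\n", owner->id);   /* prints: id = 42 */
            return 0;
    }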
