From: Johannes Weiner <jweiner@redhat.com>
Date: Thu, 1 Sep 2011
Subject: Re: [patch] Revert "memcg: add memory.vmscan_stat"

On Thu, Sep 01, 2011 at 12:04:24AM -0700, Ying Han wrote:
> On Wed, Aug 31, 2011 at 11:40 PM, Johannes Weiner <jweiner@redhat.com> wrote:
> > On Wed, Aug 31, 2011 at 11:05:51PM -0700, Ying Han wrote:
> >> On Tue, Aug 30, 2011 at 1:42 AM, Johannes Weiner <jweiner@redhat.com> wrote:
> >> > You want to look at A and see whether its limit was responsible for
> >> > reclaim scans in any children.  IMO, that is asking the question
> >> > backwards.  Instead, there is a cgroup under reclaim and one wants to
> >> > find out the cause for that.  Not the other way round.
> >> >
> >> > In my original proposal I suggested differentiating reclaim caused by
> >> > internal pressure (due to own limit) and reclaim caused by
> >> > external/hierarchical pressure (due to limits from parents).
> >> >
> >> > If you want to find out why C is under reclaim, look at its reclaim
> >> > statistics.  If the _limit numbers are high, C's limit is the problem.
> >> > If the _hierarchical numbers are high, the problem is B, A, or
> >> > physical memory, so you check B for _limit and _hierarchical as well,
> >> > then move on to A.
> >> >
> >> > Implementing this would be as easy as passing not only the memcg to
> >> > scan (victim) to the reclaim code, but also the memcg /causing/ the
> >> > reclaim (root_mem):
> >> >
> >> >        root_mem == victim -> account to victim as _limit
> >> >        root_mem != victim -> account to victim as _hierarchical
> >> >
> >> > This would make things much simpler and more natural, both the code
> >> > and the way of tracking down a problem, IMO.
> >>
> >> This is pretty much the stats I am currently using for debugging the
> >> reclaim patches. For example:
> >>
> >> scanned_pages_by_system 0
> >> scanned_pages_by_system_under_hierarchy 50989
> >>
> >> scanned_pages_by_limit 0
> >> scanned_pages_by_limit_under_hierarchy 0
> >>
> >> "_system" is count under global reclaim, and "_limit" is count under
> >> per-memcg reclaim.
> >> "_under_hiearchy" is set if memcg is not the one triggering pressure.
> >
> > I don't get this distinction between _system and _limit.  How is it
> > orthogonal to _limit vs. _hierarchy, i.e. internal vs. external?
>
> Something like:
>
> +enum mem_cgroup_scan_context {
> +	SCAN_BY_SYSTEM,
> +	SCAN_BY_SYSTEM_UNDER_HIERARCHY,
> +	SCAN_BY_LIMIT,
> +	SCAN_BY_LIMIT_UNDER_HIERARCHY,
> +	NR_SCAN_CONTEXT,
> +};
>
> if (global_reclaim(sc))
> 	context = SCAN_BY_SYSTEM;
> else
> 	context = SCAN_BY_LIMIT;
>
> /* each *_UNDER_HIERARCHY value directly follows its base in the enum */
> if (target != mem)
> 	context++;

I understand what you count, just not why. If we just had

SCAN_LIMIT
SCAN_HIERARCHY

wouldn't they convey everything necessary?  Global pressure is just
hierarchical pressure; it comes from the outermost 'container', the
machine itself.

If you have just one memcg, SCAN_LIMIT shows reclaim pressure because
of the limit and SCAN_HIERARCHY shows global pressure.

With a hierarchical setup, you can find the source of pressure either
in SCAN_LIMIT or by looking at SCAN_HIERARCHY and recursively
checking the parents.
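
To illustrate that walk, a sketch with made-up field names
(scan_limit, scan_hierarchy, parent); illustration only:

struct mem_cgroup *mem = victim;	/* the memcg under reclaim */

/* while the dominant pressure is external, move up one level */
while (mem && mem->scan_hierarchy >= mem->scan_limit)
	mem = mem->parent;	/* NULL above the root: physical memory */

/* a non-NULL mem means its own limit is the cause */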

root_mem_cgroup
       /
      A
     /
    B

Where is the difference for B between outside pressure coming from
physical memory limitations and outside pressure coming from the
limit in A?  The problem is not in B; you have to check the parents
anyway.

Or put differently:

root_mem_cgroup
       /
      A
     /
    B
   /
  C

In C, you would account global pressure separately but would not make
a distinction between pressure from A's limit and pressure from B's
limit.
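
Concretely, with your four counters, C's accounting would come out
as:

	pressure source     counter bumped in C
	physical memory  -> SCAN_BY_SYSTEM_UNDER_HIERARCHY
	A's limit        -> SCAN_BY_LIMIT_UNDER_HIERARCHY
	B's limit        -> SCAN_BY_LIMIT_UNDER_HIERARCHY (same counter)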

What makes the physical memory limit so special that reclaims caused
by it must be distinguished from reclaims caused by other
hierarchical limits?
