Subject: Re: [rfc patch 0/6] mm: memcg naturalization
* Johannes Weiner <hannes@cmpxchg.org> [2011-05-16 12:57:29]:

> On Mon, May 16, 2011 at 04:00:34PM +0530, Balbir Singh wrote:
> > * Johannes Weiner <hannes@cmpxchg.org> [2011-05-12 16:53:52]:
> >
> > > Hi!
> > >
> > > Here is a patch series that is a result of the memcg discussions on
> > > LSF (memcg-aware global reclaim, global lru removal, struct
> > > page_cgroup reduction, soft limit implementation) and the recent
> > > feature discussions on linux-mm.
> > >
> > > The long-term idea is to have memcgs no longer bolted to the side of
> > > the mm code, but to integrate them as much as possible, so that there
> > > is a native understanding of containers and the traditional !memcg
> > > setup is just a single group. This series is an approach in that
> > > direction.
> > >
> > > It is a rather early snapshot, WIP, barely tested etc., but I wanted
> > > to get your opinions before further pursuing it. It is also part of
> > > my counter-argument to the proposals of adding memcg-reclaim-related
> > > user interfaces at this point in time, so I wanted to push this out
> > > the door before things are merged into .40.
> > >
> > > The patches are quite big; I am still looking for things to factor and
> > > split out, sorry for this. Documentation is on its way as well ;)
> > >
> > > #1 and #2 are boring preparatory work. #3 makes traditional reclaim
> > > in vmscan.c memcg-aware, which is a prerequisite for both removal of
> > > the global lru in #5 and the way I reimplemented soft limit reclaim in
> > > #6.
> >
> > A large part of the acceptance would be based on what the test results
> > for common mm benchmarks show.
>
> I will try to ensure the following things:
>
> 1. will not degrade performance on !CONFIG_MEMCG kernels
>
> 2. will not degrade performance on CONFIG_MEMCG kernels without
> configured memcgs. This might be the most important one, as most
> desktop/server distributions enable the memory controller by default.
>
> 3. will not degrade overall performance of workloads running
> concurrently in separate memory control groups. I expect some shifts,
> but ones that even out performance differences.
>
> Please let me know what you consider common mm benchmarks.

1, 2 and 3 do sound nice. What workloads do you intend to run? We used
reaim, lmbench, and page-fault-rate based tests.
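For the last one, as a rough illustration only (not the exact harness we
used), a minimal page-fault-rate microbenchmark can just touch a large
anonymous mapping once per page and report minor faults per second from
getrusage():

/*
 * Illustrative page-fault-rate microbenchmark sketch: fault in a large
 * anonymous mapping one byte per page and report minor faults per
 * second.  The 512 MB working set size is an arbitrary example value.
 */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
	size_t size = 512UL << 20;	/* 512 MB anonymous working set */
	long page = sysconf(_SC_PAGESIZE);
	struct rusage ru_start, ru_end;
	struct timeval tv_start, tv_end;
	char *buf;
	size_t i;
	double secs;

	buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	getrusage(RUSAGE_SELF, &ru_start);
	gettimeofday(&tv_start, NULL);

	/* Write one byte per page so every page takes a fault. */
	for (i = 0; i < size; i += page)
		buf[i] = 1;

	gettimeofday(&tv_end, NULL);
	getrusage(RUSAGE_SELF, &ru_end);

	secs = (tv_end.tv_sec - tv_start.tv_sec) +
	       (tv_end.tv_usec - tv_start.tv_usec) / 1e6;
	printf("%ld minor faults in %.3f s (%.0f faults/s)\n",
	       ru_end.ru_minflt - ru_start.ru_minflt, secs,
	       (ru_end.ru_minflt - ru_start.ru_minflt) / secs);

	munmap(buf, size);
	return 0;
}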

--
Three Cheers,
Balbir

