 
    Subject: Re: Help Resource Counters Scale better (v4)
    On Tue, 11 Aug 2009 20:14:05 +0530
    Balbir Singh <balbir@linux.vnet.ibm.com> wrote:

    > Enhancement: Remove the overhead of root based resource counter accounting
    >
    > From: Balbir Singh <balbir@linux.vnet.ibm.com>
    >
    > This patch reduces the resource counter overhead (mostly spinlock)
    > associated with the root cgroup. This is one of several
    > patches to reduce mem cgroup overhead. I had posted other
    > approaches earlier (including using percpu counters). Those
    > patches will be a natural addition and will be added iteratively
    > on top of these.
    >
    > The patch stops resource counter accounting for the root cgroup.
    > The data for display is derived from the statistics we maintain
    > via mem_cgroup_charge_statistics (which is more scalable).
    >
    > The test results I see on a 24-way system show that
    >
    > 1. The lock contention disappears from /proc/lock_stats
    > 2. The results of the test are comparable to running with
    > cgroup_disable=memory.
    >
    > Please test/review.

    I don't get it.

    The patch appears to skip accounting altogether for the root memcgroup
    and then adds some accounting back in for swap. Or something like
    that. How come? Do we actually not need the root memcgroup
    accounting?

    IOW, the changelog sucks ;)
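
    (For reference, the shape of the change, as described by the changelog,
    appears to be roughly the sketch below. This is an illustration only,
    not the patch's actual code: the helper names and signatures shown
    here, e.g. mem_cgroup_is_root() and the charge path, are assumed for
    the example.)

    static int try_charge_sketch(struct mem_cgroup *mem, gfp_t gfp_mask)
    {
            struct res_counter *fail_res;

            if (mem_cgroup_is_root(mem)) {
                    /*
                     * No limit can be set on the root group, so the
                     * spinlock-protected res_counter is skipped entirely;
                     * the usage shown to userspace is reconstructed from
                     * the per-cpu charge statistics instead.
                     */
                    return 0;
            }

            /* Non-root groups still take the res_counter spinlock here. */
            return res_counter_charge(&mem->res, PAGE_SIZE, &fail_res);
    }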

    Is this an alternative approach to using percpu_counters, or do we do
    both or do we choose one or the other? res_counter_charge() really is
    quite sucky.
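
    (For comparison, the percpu_counter route would look roughly like the
    sketch below; the counter name and charge helper are made up for the
    example. Each CPU accumulates a local delta and only folds it into the
    shared count, under the lock, once it crosses the batch size, so the
    common-case charge does not contend. The trade-off is that reads via
    percpu_counter_read() are approximate unless the slower
    percpu_counter_sum() is used.)

    #include <linux/percpu_counter.h>

    static struct percpu_counter memcg_usage;

    /* percpu_counter_init(&memcg_usage, 0) would run at setup time. */

    static void charge_one_page(void)
    {
            /*
             * Lockless in the common case: the per-cpu delta is only
             * folded into the shared count when it exceeds the batch.
             */
            percpu_counter_add(&memcg_usage, PAGE_SIZE);
    }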

    The patch didn't have a signoff.

    It would be nice to finalise those performance testing results and
    include them in the new, improved patch description.


