Date: Mon, 4 Jan 2010 09:35:28 +0900
From: KAMEZAWA Hiroyuki <>
Subject: Re: [RFC] Shared page accounting for memory cgroup
On Mon, 4 Jan 2010 05:37:52 +0530 Balbir Singh <balbir@linux.vnet.ibm.com> wrote:
> * KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> [2010-01-04 08:51:08]:
>
> > On Tue, 29 Dec 2009 23:57:43 +0530
> > Balbir Singh <balbir@linux.vnet.ibm.com> wrote:
> >
> > > Hi, Everyone,
> > >
> > > I've been working on heuristics for shared page accounting for the
> > > memory cgroup. I've tested the patches by creating multiple cgroups
> > > and running programs that share memory and observed the output.
> > >
> > > Comments?
> >
> > Hmm? Why we have to do this in the kernel ?
>
> For several reasons that I can think of
>
> 1. With task migration changes coming in, getting consistent data free of races
> is going to be hard.
Hmm, let's look at the real-world "ps" and "top" commands. Even though there
is no guarantee on the error range of their data, they are still useful.
> 2. The cost of doing it in the kernel is not high, it does not impact
> the memcg runtime, it is a request-response sort of cost.
>
> 3. The cost in user space is going to be high and the implementation
> cumbersome to get right.

I don't like moving a cost from userland into the kernel. Considering a
real-time or fully-preemptive kernel, such a very long read_lock() section
in the kernel is not good, IMHO. (I think css_set_lock should be a
mutex/rw-sem...) cgroup_iter_xxx can block cgroup_post_fork(), and this
may cause critical system delays of milliseconds.
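For illustration, here is a minimal sketch (not the actual patch) of the
kind of per-cgroup task walk in question, using the cgroup_iter_* API:

#include <linux/cgroup.h>
#include <linux/sched.h>

/*
 * Minimal sketch (not the real accounting code): walk every task in a
 * cgroup.  cgroup_iter_start() takes read_lock(&css_set_lock) and holds
 * it until cgroup_iter_end(); cgroup_post_fork() needs the write side,
 * so every fork() on the system can stall behind a long walk here.
 */
static unsigned long walk_cgroup_tasks(struct cgroup *cgrp)
{
	struct cgroup_iter it;
	struct task_struct *tsk;
	unsigned long nr = 0;

	cgroup_iter_start(cgrp, &it);	/* read_lock(&css_set_lock) */
	while ((tsk = cgroup_iter_next(cgrp, &it)))
		nr++;			/* real code would inspect tsk->mm */
	cgroup_iter_end(cgrp, &it);	/* read_unlock(&css_set_lock) */

	return nr;
}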
BTW, if you really want to calculate something atomically, I think the
following interface may be welcome for freezing.
  cgroup.lock

  # echo 1 > /...../cgroup.lock
	All task moves, mkdir, and rmdir against this cgroup will be
	blocked by a mutex. (But fork/exit will not be blocked.)

  # echo 0 > /...../cgroup.lock
	Unlock.

  # cat /...../cgroup.lock
	Show lock status and lock history (for debug).
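A rough sketch of how such a cgroup.lock file could be wired up with the
cftype API (every function and variable name below is hypothetical).
Instead of holding a mutex across the write(2) return (a mutex must be
unlocked by the task that took it, so it cannot straddle two writes from
userland), a freeze flag checked by the move/mkdir/rmdir paths gives a
similar effect:

#include <linux/cgroup.h>
#include <linux/errno.h>
#include <asm/atomic.h>

/* Hypothetical freeze flag; 1 while the cgroup tree is "locked". */
static atomic_t cgroup_frozen = ATOMIC_INIT(0);

static u64 cgroup_lock_read(struct cgroup *cgrp, struct cftype *cft)
{
	/* a real version would also report lock history for debug */
	return atomic_read(&cgroup_frozen);
}

static int cgroup_lock_write(struct cgroup *cgrp, struct cftype *cft,
			     u64 val)
{
	atomic_set(&cgroup_frozen, val ? 1 : 0);
	return 0;
}

/* Task move, mkdir, and rmdir paths would check the flag first: */
static int cgroup_frozen_check(void)
{
	return atomic_read(&cgroup_frozen) ? -EBUSY : 0;
}

static struct cftype cft_lock = {
	.name = "cgroup.lock",
	.read_u64 = cgroup_lock_read,
	.write_u64 = cgroup_lock_write,
};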
Maybe this is good for some kinds of middleware. But it may be difficult
if we have to consider hierarchy.
Thanks,
-Kame