Subject: Re: [RFC] Shared page accounting for memory cgroup
On Thu, 7 Jan 2010 12:45:54 +0530
Balbir Singh <balbir@linux.vnet.ibm.com> wrote:

> * KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> [2010-01-06 16:12:11]:
> > And it piles up costs? I think cgroup developers should pay more
> > attention to fork/exit costs; they keep getting slower and slower.
> > On that point, I have never liked the migrate-at-task-move work in
> > cpuset and memcg.
> >
> > My first objection to this patch is that this "shared" doesn't mean
> > "shared between cgroups" but "shared between processes".
> > I think that is of no use and no help to users.
> >
>
> So what in your opinion would help end users? My concern is that as
> we make progress with memcg, we account only for privately used pages
> with no hint/data about the real usage (shared within or with other
> cgroups).

The real usage is already shown as

[root@bluextal ref-mmotm]# cat /cgroups/memory.stat
cache 7706181632
rss 120905728
mapped_file 32239616

These numbers are the real usage. And "sum of rss - rss+mapped" doesn't
tell you anything.
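
As a usage note, those byte counts can be summarized straight from the
same file; a minimal sketch, assuming the memory cgroup is mounted at
/cgroups as in the output above:
==
# print the "real usage" fields from memory.stat in MiB
awk '/^(cache|rss|mapped_file) /{printf "%-12s %8.1f MiB\n", $1, $2/1048576}' \
    /cgroups/memory.stat
==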

> How do we decide if one cgroup is really heavy?
>

What does "heavy" mean? "Hard to page out"?

Historically, that has been captured by pagein/pageout _speed_.
"How heavily loaded is the memory system?" can only be measured by speed.
If you add a latency statistic for memcg, I'll be glad to use it.
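
For example, a minimal sketch of measuring that speed for one memcg,
assuming memory.stat exposes the pgpgin/pgpgout event counters and the
memory cgroup is mounted at /cgroups as above (interval and paths are
just illustrative):
==
# print per-second pgpgin/pgpgout deltas for one cgroup
# (the first sample shows the cumulative counts, not a delta)
prev_in=0; prev_out=0
while sleep 1; do
    cur_in=$(awk '/^pgpgin /{print $2}' /cgroups/memory.stat)
    cur_out=$(awk '/^pgpgout /{print $2}' /cgroups/memory.stat)
    echo "pgpgin/s: $((cur_in - prev_in))  pgpgout/s: $((cur_out - prev_out))"
    prev_in=$cur_in; prev_out=$cur_out
done
==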

Anyway, "how successfully memory reclaim can proceed" is a generic VM
problem rather than a memcg one. Maybe there are no good answers from
the VM guys....
I think you should add that code to the global VM rather than to cgroup.

"How pages are shared" doesn't show good hints. I don't hear such parameter
is used in production's resource monitoring software.


> > And implementation is 2nd thing.
> >
>
> More details on your concern, please!
>
I already wrote why....why do you want to make fork()/exit() slower for
something that does not need to be done atomically?

There are many hosts which have thousands of processes, and on a
production server a single cgroup may contain thousands of processes.
In that situation, how much does "make kernel" slow down while the
following runs?
==
while true; do cat /cgroup/memory.shared > /dev/null; done
==
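
One rough way to quantify that, assuming the memory.shared file this RFC
would add and a cgroup mount at /cgroup as in the loop above (the -j
count and build target are arbitrary):
==
# poll the new file in the background, then time the build
while true; do cat /cgroup/memory.shared > /dev/null; done &
poller=$!
time make -j8 > /dev/null
kill $poller
==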

In a word, the implementation problem is
 - An operation against one container can cause a system-wide slowdown.
That is also why I don't like heavy work at task move under cgroup.


Yes, this can happen in other places (we have to make some improvements
there, too). But it is not good for the concept of isolation by
container, anyway.

Thanks,
-Kame



