Subject: Re: memcg: save 20% of per-page memcg memory overhead
* Johannes Weiner <hannes@cmpxchg.org> [2011-02-03 15:26:01]:

> This patch series removes the direct page pointer from struct
> page_cgroup, which saves 20% of per-page memcg memory overhead (Fedora
> and Ubuntu enable memcg per default, openSUSE apparently too).
>
> The node id or section number is encoded in the remaining free bits of
> pc->flags which allows calculating the corresponding page without the
> extra pointer.
>
> I ran what I think is a worst-case microbenchmark that just cats a
> large sparse file to /dev/null, because it means that walking the LRU
> list on behalf of per-cgroup reclaim and looking up pages from
> page_cgroups is happening constantly and at a high rate. But it made
> no measurable difference. A profile reported a 0.11% share of the new
> lookup_cgroup_page() function in this benchmark.

Wow! Definitely worth a deeper look.
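
Just to make sure I follow the encoding: in the flat-memory case I
imagine the lookup ends up roughly like the sketch below, where the
node id sits in the top bits of pc->flags and the pfn falls out of the
page_cgroup's offset within that node's array. The bit layout and
helper names here are my guesses, not taken from your patch:

/*
 * Illustrative sketch only -- bit layout and helper names are
 * assumptions.  Flat memory: the top bits of pc->flags hold the node
 * id, so the page can be computed from the page_cgroup's position in
 * that node's array, without a direct page pointer.
 */
#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/page_cgroup.h>

#define PCG_ARRAYID_SHIFT	(BITS_PER_LONG - NODES_SHIFT)

static inline int page_cgroup_array_id(struct page_cgroup *pc)
{
	return pc->flags >> PCG_ARRAYID_SHIFT;
}

struct page *lookup_cgroup_page(struct page_cgroup *pc)
{
	pg_data_t *pgdat = NODE_DATA(page_cgroup_array_id(pc));
	unsigned long pfn = pc - pgdat->node_page_cgroup +
			    pgdat->node_start_pfn;

	return pfn_to_page(pfn);
}

I assume the sparsemem case goes through the section's page_cgroup map
instead, with the section number stored in the same bits.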

>
> Hannes

--
Three Cheers,
Balbir

