Subject: Re: [patch 00/17] CFS Bandwidth Control v7.1

On Thu, 2011-07-07 at 13:23 +0200, Ingo Molnar wrote:

> Well, the most recent run Hu Tao sent (with lockdep disabled) is
> different:
>
> Table 2 shows the differences between patched and unpatched kernels;
> quota is set to a large value to avoid processes being throttled.
>
> quota/period                 cycles                  instructions            branches
> ---------------------------------------------------------------------------------------------
> base                         1,146,384,132           1,151,216,688           212,431,532
> patch cgroup disabled        1,163,717,547 ( 1.51%)  1,165,238,015 ( 1.22%)  215,092,327 ( 1.25%)
> patch 10000000000/1000       1,244,889,136 ( 8.59%)  1,299,128,502 (12.85%)  243,162,542 (14.47%)
> patch 10000000000/10000      1,253,305,706 ( 9.33%)  1,299,167,897 (12.85%)  243,175,027 (14.47%)
> patch 10000000000/100000     1,252,374,134 ( 9.25%)  1,299,314,357 (12.86%)  243,203,923 (14.49%)
> patch 10000000000/1000000    1,254,165,824 ( 9.40%)  1,299,751,347 (12.90%)  243,288,600 (14.53%)
> ---------------------------------------------------------------------------------------------
>
>
> The +1.5% increase in vanilla kernel context-switching overhead is
> unfortunate - where does that overhead come from?
>
> The +9% increase in cgroups context-switching overhead looks rather
> brutal.
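
As an aside, for anyone wanting to reproduce a row of that table: the
setup presumably maps onto the CFS bandwidth knobs roughly as below.
This is a sketch only - the mount point and group name are made up,
and the values (taken from the last table row) are in microseconds:

  # Mount the cpu controller (path is illustrative).
  mkdir -p /mnt/cgroup
  mount -t cgroup -o cpu none /mnt/cgroup
  mkdir /mnt/cgroup/test

  # Huge quota: the group is never actually throttled, but the
  # bandwidth accounting paths still get exercised.
  echo 10000000000 > /mnt/cgroup/test/cpu.cfs_quota_us
  echo 1000000     > /mnt/cgroup/test/cpu.cfs_period_us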

As to those numbers: do those runs execute pipe-test inside a cgroup,
or are you always using the root cgroup?
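
Concretely, the two cases could be distinguished along these lines
(a sketch: perf bench sched pipe stands in for whatever pipe-test
binary was actually used, and the cgroup path matches the
illustrative one above):

  # root cgroup:
  perf stat -e cycles,instructions,branches -r 5 perf bench sched pipe

  # child cgroup: move the shell into the group first, then measure
  echo $$ > /mnt/cgroup/test/tasks
  perf stat -e cycles,instructions,branches -r 5 perf bench sched pipe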

