    Date:    2011-07-07 13:25
    From:    Ingo Molnar
    Subject: Re: [patch 00/17] CFS Bandwidth Control v7.1

    * Paul Turner <pjt@google.com> wrote:

    > The summary results (from Hu Tao's most recent run) are:
    >                                               cycles                  instructions            branches
    > -------------------------------------------------------------------------------------------------------------
    > base                                          7,526,317,497           8,666,579,347           1,771,078,445
    > +patch, cgroup not enabled                    7,610,354,447 ( 1.12%)  8,569,448,982 (-1.12%)  1,751,675,193 (-0.11%)
    > +patch, 10000000000/1000    (quota/period)    7,856,873,327 ( 4.39%)  8,822,227,540 ( 1.80%)  1,801,766,182 ( 1.73%)
    > +patch, 10000000000/10000   (quota/period)    7,797,711,600 ( 3.61%)  8,754,747,746 ( 1.02%)  1,788,316,969 ( 0.97%)
    > +patch, 10000000000/100000  (quota/period)    7,777,784,384 ( 3.34%)  8,744,979,688 ( 0.90%)  1,786,319,566 ( 0.86%)
    > +patch, 10000000000/1000000 (quota/period)    7,802,382,802 ( 3.67%)  8,755,638,235 ( 1.03%)  1,788,601,070 ( 0.99%)
    > -------------------------------------------------------------------------------------------------------------

    Well, the numbers from the most recent run Hu Tao sent (with lockdep
    disabled) are different:

    Table 2 shows the differences between the patched and unpatched kernels.
    The quota is set to a large value to avoid processes being throttled
    (see the sketch after the table for how these values would be set):

    quota/period                  cycles                  instructions             branches
    ------------------------------------------------------------------------------------------------------
    base                          1,146,384,132           1,151,216,688            212,431,532
    patch cgroup disabled         1,163,717,547 (1.51%)   1,165,238,015 ( 1.22%)   215,092,327 ( 1.25%)
    patch 10000000000/1000        1,244,889,136 (8.59%)   1,299,128,502 (12.85%)   243,162,542 (14.47%)
    patch 10000000000/10000       1,253,305,706 (9.33%)   1,299,167,897 (12.85%)   243,175,027 (14.47%)
    patch 10000000000/100000      1,252,374,134 (9.25%)   1,299,314,357 (12.86%)   243,203,923 (14.49%)
    patch 10000000000/1000000     1,254,165,824 (9.40%)   1,299,751,347 (12.90%)   243,288,600 (14.53%)
    ------------------------------------------------------------------------------------------------------
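
    (The quota/period pairs above are, presumably, the cpu.cfs_quota_us /
    cpu.cfs_period_us values, in microseconds, that this series exposes per
    cgroup. As a minimal sketch - the mount point and group name below are
    assumptions, not taken from the thread - the 10000000000/100000 row
    would be set up roughly like this:)

    #include <stdio.h>
    #include <stdlib.h>

    /* write a single integer to a cgroup control file */
    static void write_val(const char *path, long long val)
    {
    	FILE *f = fopen(path, "w");

    	if (!f) {
    		perror(path);
    		exit(1);
    	}
    	fprintf(f, "%lld\n", val);
    	fclose(f);
    }

    int main(void)
    {
    	/* 100ms period, effectively unlimited quota (avoids throttling) */
    	write_val("/cgroup/cpu/test/cpu.cfs_period_us", 100000);
    	write_val("/cgroup/cpu/test/cpu.cfs_quota_us", 10000000000LL);
    	return 0;
    }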


    The +1.5% increase in vanilla kernel context-switching cost is
    unfortunate - where does that overhead come from?

    The +9% increase in cgroups context-switching overhead looks rather
    brutal.
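
    For reference, per-switch numbers like these typically come from running
    perf stat over a pipe ping-pong loop; below is a minimal sketch of such
    a microbenchmark (the exact benchmark Hu Tao ran is not shown in this
    excerpt):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define LOOPS	1000000

    int main(void)
    {
    	int p1[2], p2[2];
    	char c = 0;
    	int i;

    	if (pipe(p1) || pipe(p2)) {
    		perror("pipe");
    		return 1;
    	}

    	if (fork() == 0) {
    		/* child: echo each byte back, forcing a switch per leg */
    		for (i = 0; i < LOOPS; i++)
    			if (read(p1[0], &c, 1) != 1 || write(p2[1], &c, 1) != 1)
    				exit(1);
    		exit(0);
    	}

    	/* parent: each iteration costs two context switches */
    	for (i = 0; i < LOOPS; i++)
    		if (write(p1[1], &c, 1) != 1 || read(p2[0], &c, 1) != 1)
    			return 1;

    	wait(NULL);
    	return 0;
    }

    Something like "perf stat -e cycles,instructions,branches ./pipe-test"
    would then produce counts in the shape of the tables above.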

    Thanks,

    Ingo

