Subject: Re: [PATCH] lib: proportion: fix underflow in prop_norm_percpu()
From: Jiang Zhe <zhe.jiang@intel.com>
Date: 2007-12-17
    On Fri, 2007-12-14 at 17:01 +0100, Peter Zijlstra wrote:
    > Subject: lib: proportion: fix underflow in prop_norm_percpu()
    >
    > Zhe Jiang noticed that it's possible to underflow pl->events in
    > prop_norm_percpu() when the value returned by percpu_counter_read() is
    > less than the error on that read and the period delay is > 1. In that
    > case the subtraction of half might not trigger the batch increment, so
    > the value read will be identical on the next iteration, causing the
    > same half to be subtracted again and again.
    >
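
    To see the underflow concretely, here is a toy userspace model
    (illustration only: toy_counter, toy_read(), toy_sum() and toy_add()
    are made-up stand-ins for the percpu_counter API, with all per-cpu
    error collapsed into one pending delta):

        #include <stdio.h>

        /*
         * Toy percpu_counter: a central count (what percpu_counter_read()
         * returns) plus one pending delta that is only folded into the
         * central count in batch-sized steps.
         */
        struct toy_counter {
            long count;
            long delta;
        };

        #define TOY_BATCH 32

        static long toy_read(struct toy_counter *c) { return c->count; }
        static long toy_sum(struct toy_counter *c) { return c->count + c->delta; }

        static void toy_add(struct toy_counter *c, long amount)
        {
            c->delta += amount;
            if (c->delta >= TOY_BATCH || c->delta <= -TOY_BATCH) {
                c->count += c->delta;   /* batch fold-over */
                c->delta = 0;
            }
        }

        int main(void)
        {
            /* toy_read() sees 4, but the true value is only 1 */
            struct toy_counter events = { .count = 4, .delta = -3 };
            int missed;

            /* the old subtraction loop, with 8 missed periods */
            for (missed = 0; missed < 8; missed++) {
                long val = toy_read(&events);   /* stale: always 4 */
                long half = (val + 1) >> 1;     /* always 2 */

                if (!val)
                    break;

                toy_add(&events, -half);        /* never reaches the batch */
            }

            /* prints "read=4 sum=-15"; pl->events is unsigned in the
             * kernel, so the true value has wrapped to a huge number */
            printf("read=%ld sum=%ld\n", toy_read(&events), toy_sum(&events));
            return 0;
        }

    Each pass subtracts the same stale half without ever disturbing the
    value the next pass reads.
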
    > Fix this by rewriting the division as a single subtraction instead of a
    > subtraction loop and using percpu_counter_sum() when the value returned
    > by percpu_counter_read() is smaller than the error.
    >
    > The latter is still needed if we want pl->events to shrink properly in the
    > error region.
    >
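
    The collapsed form is easy to sanity-check: k rounds of the old
    "subtract (val + 1) >> 1" compute exactly one shift by k, so a single
    counter update of -val + (val >> period) lands on the same result. A
    standalone check (plain C, arbitrary example values, no kernel API):

        #include <assert.h>
        #include <stdio.h>

        int main(void)
        {
            unsigned long val = 1000, period = 3, loop = val;
            unsigned long i;

            /* old way: one halving per missed period */
            for (i = 0; i < period && loop; i++)
                loop -= (loop + 1) >> 1;        /* loop = loop / 2 */

            /* new way: a single update,
             *   __percpu_counter_add(&pl->events, -val + (val >> period), ...)
             * leaves exactly val >> period behind */
            assert(loop == val >> period);
            printf("%lu -> %lu after %lu missed periods\n",
                   val, val >> period, period);
            return 0;
        }
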
    > Jiang, can I get a Reviewed-by from you? - if you agree that is :-)
    >
    > Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    > Cc: zhejiang <zhe.jiang@intel.com>
    > ---
    > lib/proportions.c |   36 +++++++++++++++---------------------
    >  1 file changed, 15 insertions(+), 21 deletions(-)
    >
    > Index: linux-2.6/lib/proportions.c
    > ===================================================================
    > --- linux-2.6.orig/lib/proportions.c
    > +++ linux-2.6/lib/proportions.c
    > @@ -190,6 +190,8 @@ prop_adjust_shift(int *pl_shift, unsigne
    >   * PERCPU
    >   */
    >
    > +#define PROP_BATCH (8*(1+ilog2(nr_cpu_ids)))
    > +
    >  int prop_local_init_percpu(struct prop_local_percpu *pl)
    >  {
    >  	spin_lock_init(&pl->lock);
    > @@ -230,31 +232,23 @@ void prop_norm_percpu(struct prop_global
    >
    >  	spin_lock_irqsave(&pl->lock, flags);
    >  	prop_adjust_shift(&pl->shift, &pl->period, pg->shift);
    > +
    >  	/*
    >  	 * For each missed period, we halve the local counter.
    >  	 * basically:
    >  	 *   pl->events >> (global_period - pl->period);
    > -	 *
    > -	 * but since the distributed nature of percpu counters makes division
    > -	 * rather hard, use a regular subtraction loop. This is safe, because
    > -	 * the events will only ever be incremented, hence the subtraction
    > -	 * can never result in a negative number.
    >  	 */
    > -	while (pl->period != global_period) {
    > -		unsigned long val = percpu_counter_read(&pl->events);
    > -		unsigned long half = (val + 1) >> 1;
    > -
    > -		/*
    > -		 * Half of zero won't be much less, break out.
    > -		 * This limits the loop to shift iterations, even
    > -		 * if we missed a million.
    > -		 */
    > -		if (!val)
    > -			break;
    > -
    > -		percpu_counter_add(&pl->events, -half);
    > -		pl->period += period;
    > -	}
    > +	period = (global_period - pl->period) >> (pg->shift - 1);
    > +	if (period < BITS_PER_LONG) {
    > +		s64 val = percpu_counter_read(&pl->events);
    > +
    > +		if (val < (nr_cpu_ids * PROP_BATCH))
    > +			val = percpu_counter_sum(&pl->events);
    > +
    > +		__percpu_counter_add(&pl->events, -val + (val >> period), PROP_BATCH);
    > +	} else
    > +		percpu_counter_set(&pl->events, 0);
    > +
    >  	pl->period = global_period;
    >  	spin_unlock_irqrestore(&pl->lock, flags);
    >  }
    > @@ -267,7 +261,7 @@ void __prop_inc_percpu(struct prop_descr
    >  	struct prop_global *pg = prop_get_global(pd);
    >
    >  	prop_norm_percpu(pg, pl);
    > -	percpu_counter_add(&pl->events, 1);
    > +	__percpu_counter_add(&pl->events, 1, PROP_BATCH);
    >  	percpu_counter_add(&pg->events, 1);
    >  	prop_put_global(pd, pg);
    >  }
    >
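
    On the "smaller than the error" test: percpu_counter_read() can be off
    by up to nearly PROP_BATCH per cpu, since that is the batch the counter
    is updated with here, so the patch falls back to the exact (but more
    expensive) percpu_counter_sum() whenever the read value is below
    nr_cpu_ids * PROP_BATCH. A sketch of how that threshold scales
    (userspace approximation; ilog2_u() stands in for the kernel's
    ilog2()):

        #include <stdio.h>

        static unsigned int ilog2_u(unsigned int n)
        {
            unsigned int l = 0;
            while (n >>= 1)
                l++;
            return l;
        }

        int main(void)
        {
            unsigned int cpus;

            for (cpus = 1; cpus <= 64; cpus *= 4) {
                unsigned int batch = 8 * (1 + ilog2_u(cpus));
                printf("%2u cpus: PROP_BATCH = %2u, sum() below %4u events\n",
                       cpus, batch, cpus * batch);
            }
            return 0;
        }
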

    Reviewed-by: Jiang Zhe <zhe.jiang@intel.com>

    Thanks!




