Subject: Re: [PATCH] percpu_counter: Fix __percpu_counter_sum()
On Wed, 2008-12-10 at 06:09 +0100, Eric Dumazet wrote:
> Now that percpu_counter_sum() is 'fixed', what about percpu_counter_add()?
>
> void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
> {
>         s64 count;
>         s32 *pcount;
>         int cpu = get_cpu();
>
>         pcount = per_cpu_ptr(fbc->counters, cpu);
>         count = *pcount + amount;
>         if (count >= batch || count <= -batch) {
>                 spin_lock(&fbc->lock);
>                 fbc->count += count;
>                 *pcount = 0;
>                 spin_unlock(&fbc->lock);
>         } else {
>                 *pcount = count;
>         }
>         put_cpu();
> }
>
>
> If I read this correctly, this is not IRQ safe.
>
> get_cpu() only disables preemption IMHO
>
> For nr_files, nr_dentry, nr_inodes, it should not be a problem.
>
> But for the network counters (only in net-next-2.6)
> and lib/proportions.c, don't we have a problem?
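
[For concreteness, a sketch of the interleaving being described; the counter, the call sites and the irq handler below are hypothetical, only __percpu_counter_add() is the real interface quoted above.]

#include <linux/percpu_counter.h>
#include <linux/interrupt.h>

/* Hypothetical example: the same counter is updated from process
 * context and from an interrupt handler on the same CPU. */
static struct percpu_counter my_counter;

static void my_process_context_update(void)
{
        /* get_cpu() inside __percpu_counter_add() only disables
         * preemption; an interrupt can still fire between the read
         * and the write-back of the per-cpu slot. */
        __percpu_counter_add(&my_counter, 1, 32);
}

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
        /* If this runs in the middle of the update above, one of the
         * two additions to *pcount is lost. */
        __percpu_counter_add(&my_counter, 1, 32);
        return IRQ_HANDLED;
}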

None of percpu_counter is irqsafe; for lib/proportions I disabled irqs by
hand when needed - I don't think we ought to bother with local_t,
especially as it basically sucks chunks on anything !x86.
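
[For illustration, a minimal sketch of that "disable irqs by hand" pattern around a single update; the wrapper name my_counter_add_irqsafe is made up, while local_irq_save()/local_irq_restore() and __percpu_counter_add() are the existing interfaces discussed above.]

#include <linux/percpu_counter.h>
#include <linux/irqflags.h>

/* Hypothetical caller-side wrapper: with local irqs off, an interrupt
 * handler on this CPU cannot interleave a second read-modify-write of
 * the same per-cpu slot. */
static void my_counter_add_irqsafe(struct percpu_counter *fbc,
                                   s64 amount, s32 batch)
{
        unsigned long flags;

        local_irq_save(flags);
        __percpu_counter_add(fbc, amount, batch);
        local_irq_restore(flags);
}

[Doing this only "when needed" keeps counters that are never touched from irq context free of the irq-disable cost on every update.]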



