Date: 2008-12-10
Subject: Re: [PATCH] percpu_counter: Fix __percpu_counter_sum()
Now that percpu_counter_sum() is 'fixed', what about percpu_counter_add()?

void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
{
	s64 count;
	s32 *pcount;
	int cpu = get_cpu();

	pcount = per_cpu_ptr(fbc->counters, cpu);
	count = *pcount + amount;
	if (count >= batch || count <= -batch) {
		spin_lock(&fbc->lock);
		fbc->count += count;
		*pcount = 0;
		spin_unlock(&fbc->lock);
	} else {
		*pcount = count;
	}
	put_cpu();
}


If I read this correctly, this is not IRQ safe.

get_cpu() only disables preemption, IMHO, so the read-modify-write of
*pcount can be interrupted between the load and the store.
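
As an illustration (my sketch, not existing kernel code; the name
percpu_counter_add_irq is made up), an interrupt-context user would
today have to disable interrupts by hand around the update:

static inline void percpu_counter_add_irq(struct percpu_counter *fbc,
					  s64 amount)
{
	unsigned long flags;

	/* get_cpu() inside percpu_counter_add() only disables preemption,
	 * so keep IRQs off across the whole read-modify-write. */
	local_irq_save(flags);
	percpu_counter_add(fbc, amount);
	local_irq_restore(flags);
}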

For nr_files, nr_dentry, nr_inodes, it should not be a problem.

But for the network counters (only in net-next-2.6)
and lib/proportions.c, don't we have a problem?

Using a local_t instead of an s32 for the per-cpu local counter is
possible here, so that the fast path doesn't have to disable interrupts
(i.e. fbc->counters becomes a local_t * instead of an s32 *):

void __percpu_counter_add_irqsafe(struct percpu_counter *fbc, s64 amount, s32 batch)
{
	long count;
	local_t *pcount;

	/*
	 * Following code only matters on 32bit arches: a large 'amount'
	 * cannot be folded into a 32bit local_t without truncation, so
	 * flush it to the global count directly.
	 */
	if (sizeof(amount) != sizeof(local_t)) {
		if (unlikely(amount >= batch || amount <= -batch)) {
			unsigned long flags;

			spin_lock_irqsave(&fbc->lock, flags);
			fbc->count += amount;
			spin_unlock_irqrestore(&fbc->lock, flags);
			return;
		}
	}
	pcount = per_cpu_ptr(fbc->counters, get_cpu());
	count = local_add_return((long)amount, pcount);
	if (unlikely(count >= batch || count <= -batch)) {
		unsigned long flags;

		local_sub(count, pcount);
		spin_lock_irqsave(&fbc->lock, flags);
		fbc->count += count;
		spin_unlock_irqrestore(&fbc->lock, flags);
	}
	put_cpu();
}
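
Making fbc->counters a local_t of course also means the setup has to
change, and __percpu_counter_sum() would have to use local_read() on
each slot. A rough sketch of the init side (my assumption, based on the
current percpu_counter_init() prototype, not part of the code above):

/* Sketch only: allocate local_t per-cpu slots instead of s32 ones. */
int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
{
	int cpu;

	spin_lock_init(&fbc->lock);
	fbc->count = amount;
	fbc->counters = alloc_percpu(local_t);	/* was alloc_percpu(s32) */
	if (!fbc->counters)
		return -ENOMEM;
	/* alloc_percpu() zeroes the memory; set the slots explicitly
	 * anyway so the local_t init is not left implicit. */
	for_each_possible_cpu(cpu)
		local_set(per_cpu_ptr(fbc->counters, cpu), 0);
	return 0;
}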

