Date: 2006-01-28
From: Eric Dumazet <dada1@cosmosbay.com>
Subject: Re: [patch 3/4] net: Percpufy frequently used variables -- proto.sockets_allocated
Ravikiran G Thirumalai wrote:
> On Sat, Jan 28, 2006 at 01:35:03AM +0100, Eric Dumazet wrote:
>> Eric Dumazet wrote:
>>> Andrew Morton wrote:
>>>> Eric Dumazet <dada1@cosmosbay.com> wrote:
>>>
>>> long percpu_counter_read_accurate(struct percpu_counter *fbc)
>>> {
>>> 	long res = 0;
>>> 	int cpu;
>>> 	atomic_long_t *pcount;
>>>
>>> 	for_each_cpu(cpu) {
>>> 		pcount = per_cpu_ptr(fbc->counters, cpu);
>>> 		/* dont dirty cache line if not necessary */
>>> 		if (atomic_long_read(pcount))
>>> 			res += atomic_long_xchg(pcount, 0);
> ---------------------------> (A)
>>> 	}
>
>> 	atomic_long_add(res, &fbc->count);
> ---------------------------> (B)
>> 	res = atomic_long_read(&fbc->count);
>>
>>> 	return res;
>>> }
>
> The read is still theoretically FBC_BATCH * NR_CPUS inaccurate, no?
> What happens when other cpus update their local counters at (A) and (B)?

Well, after atomic_read(&some_atomic) or percpu_counter_read_accurate(&s), the
'value' you got may already be off by some amount... You cannot 'freeze the
atomic / percpu_counter and get its accurate value'. All you can do is
'atomically add/remove some value to/from this counter (atomic or percpu_counter)'.

See percpu_counter_read_accurate() as an attempt to collect all the local offsets
(and atomically set them to 0) and atomically readjust the central fbc->count
with the sum of these offsets. It is a kind of consolidation that might be
driven by a user request (a read of /proc/sys/...) or a periodic timer.
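
For illustration only (not part of the patch), such a consolidation could be
driven from a timer, roughly as in this minimal sketch; my_counter and my_timer
are hypothetical names, and the old (unsigned long data) timer callback style
is assumed:

static struct percpu_counter my_counter;
static struct timer_list my_timer;

static void my_counter_consolidate(unsigned long data)
{
	/* fold every cpu-local offset back into the central fbc->count */
	percpu_counter_read_accurate(&my_counter);

	/* rearm: consolidate roughly once per second */
	mod_timer(&my_timer, jiffies + HZ);
}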



>
> (I am hoping we don't need percpu_counter_read_accurate anywhere yet and
> this is just a demo ;). I certainly don't want to walk all cpus on the read
> path / add a cmpxchg on the write path for the proto counters that started
> this discussion.)

As pointed out by Andrew (if you carefully decode his short sentences :) ), all
you really need is atomic_long_xchg() (only in the slow path), and
atomic_long_{read,add} in the fast path.
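
For readers that do not need an exact value, the fast path can then be a plain
read of the central count; a minimal sketch, assuming the struct percpu_counter
layout from the code attached below:

static inline long percpu_counter_read(struct percpu_counter *fbc)
{
	/* approximate value: cpu-local offsets not yet folded in */
	return atomic_long_read(&fbc->count);
}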


Eric

struct percpu_counter {
	atomic_long_t count;
	atomic_long_t *counters;
};

#ifdef CONFIG_SMP
void percpu_counter_mod(struct percpu_counter *fbc, long amount)
{
	long new;
	atomic_long_t *pcount;

	pcount = per_cpu_ptr(fbc->counters, get_cpu());

	new = atomic_long_read(pcount) + amount;

	if (new >= FBC_BATCH || new <= -FBC_BATCH) {
		new = atomic_long_xchg(pcount, 0) + amount;
		if (new)
			atomic_long_add(new, &fbc->count);
	} else
		atomic_long_add(amount, pcount);

	put_cpu();
}
EXPORT_SYMBOL(percpu_counter_mod);

long percpu_counter_read_accurate(struct percpu_counter *fbc)
{
	long res = 0;
	int cpu;
	atomic_long_t *pcount;

	for_each_cpu(cpu) {
		pcount = per_cpu_ptr(fbc->counters, cpu);
		/* don't dirty the cache line if not necessary */
		if (atomic_long_read(pcount))
			res += atomic_long_xchg(pcount, 0);
	}
	atomic_long_add(res, &fbc->count);
	return atomic_long_read(&fbc->count);
}
EXPORT_SYMBOL(percpu_counter_read_accurate);
#endif /* CONFIG_SMP */
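
For the proto.sockets_allocated case from the patch subject, usage would then
look roughly like this. This is only an illustrative sketch (the counter and
function names are made up, and module/init plumbing is omitted), not the
actual patch:

static struct percpu_counter sockets_allocated;

static int __init sockets_allocated_init(void)
{
	atomic_long_set(&sockets_allocated.count, 0);
	sockets_allocated.counters = alloc_percpu(atomic_long_t);
	return sockets_allocated.counters ? 0 : -ENOMEM;
}

static void my_sock_alloc(void)
{
	percpu_counter_mod(&sockets_allocated, 1);	/* fast path */
}

static void my_sock_free(void)
{
	percpu_counter_mod(&sockets_allocated, -1);	/* fast path */
}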
