Subject: Re: [PATCH v2] percpu-refcount: Use normal instead of RCU-sched
On Fri, 8 Nov 2019, Sebastian Andrzej Siewior wrote:

> diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
> index 7aef0abc194a2..390031e816dcd 100644
> --- a/include/linux/percpu-refcount.h
> +++ b/include/linux/percpu-refcount.h
> @@ -186,14 +186,14 @@ static inline void percpu_ref_get_many(struct percpu_ref *ref, unsigned long nr)
>  {
>  	unsigned long __percpu *percpu_count;
>
> -	rcu_read_lock_sched();
> +	rcu_read_lock();
>
>  	if (__ref_is_percpu(ref, &percpu_count))
>  		this_cpu_add(*percpu_count, nr);

You can use

__this_cpu_add()

instead, since rcu_read_lock() implies that preemption is disabled.

This will not change the generated code on x86, but other platforms that do
not use an atomic operation here will be able to avoid the extra code that
disables preemption around the per-CPU operations.

The same holds for all the other per-CPU operations in the patch.
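
For illustration, here is a sketch (not part of the posted patch) of how
the function above would look with the suggestion applied, assuming the
rcu_read_lock()-implies-preempt-disable reasoning holds:

	/*
	 * Sketch only: percpu_ref_get_many() using __this_cpu_add().
	 * Within the rcu_read_lock() section the task cannot migrate
	 * (under the assumption stated above), so the non-preempt-safe
	 * variant is sufficient; architectures that implement
	 * this_cpu_add() with an implicit preempt_disable()/
	 * preempt_enable() pair can then skip that overhead.
	 */
	static inline void percpu_ref_get_many(struct percpu_ref *ref,
					       unsigned long nr)
	{
		unsigned long __percpu *percpu_count;

		rcu_read_lock();

		if (__ref_is_percpu(ref, &percpu_count))
			__this_cpu_add(*percpu_count, nr); /* was this_cpu_add() */
		else
			atomic_long_add(nr, &ref->count);

		rcu_read_unlock();
	}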
