Date: Sat, 30 May 2015 10:18:06 -0700
From: "Paul E. McKenney" <>
Subject: Re: [RFC][PATCH 5/5] percpu-rwsem: Optimize readers and reduce global impact
On Tue, May 26, 2015 at 01:44:01PM +0200, Peter Zijlstra wrote:
> Currently the percpu-rwsem has two issues:
>
>  - it switches to (global) atomic ops while a writer is waiting;
>    which could be quite a while and slows down releasing the readers.
>
>  - it employs synchronize_sched_expedited() _twice_ which is evil and
>    should die -- it shoots IPIs around the machine.
>
> This patch cures the first problem by ordering the reader-state vs
> reader-count (see the comments in __percpu_down_read() and
> percpu_down_write()). This changes a global atomic op into a full
> memory barrier, which doesn't have the global cacheline contention.
>
> It cures the second problem by employing the rcu-sync primitives by
> Oleg which reduces to no sync_sched() calls in the 'normal' case of
> no write contention -- global locks had better be rare, and has a
> maximum of one sync_sched() call in case of contention.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  include/linux/percpu-rwsem.h  |   62 +++++++++-
>  kernel/locking/percpu-rwsem.c |  238 +++++++++++++++++++++---------------------
>  2 files changed, 176 insertions(+), 124 deletions(-)
[ . . . ]
> --- a/kernel/locking/percpu-rwsem.c
> +++ b/kernel/locking/percpu-rwsem.c
> @@ -8,158 +8,164 @@
>  #include <linux/sched.h>
>  #include <linux/errno.h>
>
> -int __percpu_init_rwsem(struct percpu_rw_semaphore *brw,
> +enum { readers_slow, readers_block };
> +
> +int __percpu_init_rwsem(struct percpu_rw_semaphore *sem,
>  			const char *name, struct lock_class_key *rwsem_key)
>  {
> -	brw->fast_read_ctr = alloc_percpu(int);
> -	if (unlikely(!brw->fast_read_ctr))
> +	sem->refcount = alloc_percpu(unsigned int);
> +	if (unlikely(!sem->refcount))
>  		return -ENOMEM;
>
> -	/* ->rw_sem represents the whole percpu_rw_semaphore for lockdep */
> -	__init_rwsem(&brw->rw_sem, name, rwsem_key);
> -	atomic_set(&brw->write_ctr, 0);
> -	atomic_set(&brw->slow_read_ctr, 0);
> -	init_waitqueue_head(&brw->write_waitq);
> +	sem->state = readers_slow;
> +	rcu_sync_init(&sem->rss, RCU_SCHED_SYNC);
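[For readers skimming the quoted changelog, the reader-side ordering it describes is roughly the following sketch: publish the per-CPU count, issue a full barrier, and only then check the writer's state, so the fast path never touches a global cacheline. This is not the actual patch; the field names follow the quoted hunk and __wait_for_writer() is a purely hypothetical slow path.]

	/*
	 * Sketch only -- not the actual patch.  sem->refcount and sem->state
	 * follow the quoted hunk; __wait_for_writer() is a hypothetical slow
	 * path that may sleep, so preemption is re-enabled before calling it.
	 */
	static void percpu_down_read_sketch(struct percpu_rw_semaphore *sem)
	{
		preempt_disable();
		__this_cpu_inc(*sem->refcount);	/* publish this reader, per-CPU only  */
		smp_mb();			/* order the count vs. the state check */
		if (likely(READ_ONCE(sem->state) != readers_block)) {
			preempt_enable();	/* fast path: no writer, no global op */
			return;
		}
		preempt_enable();
		__wait_for_writer(sem);		/* hypothetical: wait out the writer  */
	}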
But it looks like you need the RCU-sched variant. Please see below for an untested patch providing this support. One benefit of this patch is that it does not add any bloat to Tiny RCU.
Thanx, Paul
------------------------------------------------------------------------
rcu: Add RCU-sched flavors of get-state and cond-sync

The get_state_synchronize_rcu() and cond_synchronize_rcu() functions
allow polling for grace-period completion, with an actual wait for a
grace period occurring only when cond_synchronize_rcu() is called too
soon after the corresponding get_state_synchronize_rcu().  However,
these functions work only for vanilla RCU.  This commit adds
get_state_synchronize_sched() and cond_synchronize_sched(), which
provide the same capability for RCU-sched.

Reported-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index 3df6c1ec4e25..ff968b7af3a4 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -37,6 +37,16 @@ static inline void cond_synchronize_rcu(unsigned long oldstate)
 	might_sleep();
 }
 
+static inline unsigned long get_state_synchronize_sched(void)
+{
+	return 0;
+}
+
+static inline void cond_synchronize_sched(unsigned long oldstate)
+{
+	might_sleep();
+}
+
 static inline void rcu_barrier_bh(void)
 {
 	wait_rcu_gp(call_rcu_bh);
diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
index 3fa4a43ab415..80e68d344205 100644
--- a/include/linux/rcutree.h
+++ b/include/linux/rcutree.h
@@ -76,6 +76,8 @@ void rcu_barrier_bh(void);
 void rcu_barrier_sched(void);
 unsigned long get_state_synchronize_rcu(void);
 void cond_synchronize_rcu(unsigned long oldstate);
+unsigned long get_state_synchronize_sched(void);
+void cond_synchronize_sched(unsigned long oldstate);
 
 extern unsigned long rcutorture_testseq;
 extern unsigned long rcutorture_vernum;
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 1cead7806ca6..f256dee0f6b1 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -635,6 +635,8 @@ static struct rcu_torture_ops sched_ops = {
 	.deferred_free	= rcu_sched_torture_deferred_free,
 	.sync		= synchronize_sched,
 	.exp_sync	= synchronize_sched_expedited,
+	.get_state	= get_state_synchronize_sched,
+	.cond_sync	= cond_synchronize_sched,
 	.call		= call_rcu_sched,
 	.cb_barrier	= rcu_barrier_sched,
 	.fqs		= rcu_sched_force_quiescent_state,
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 2fce662fa058..e33e1a8a8d08 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3259,6 +3259,58 @@ void cond_synchronize_rcu(unsigned long oldstate)
 }
 EXPORT_SYMBOL_GPL(cond_synchronize_rcu);
 
+/**
+ * get_state_synchronize_sched - Snapshot current RCU-sched state
+ *
+ * Returns a cookie that is used by a later call to cond_synchronize_sched()
+ * to determine whether or not a full grace period has elapsed in the
+ * meantime.
+ */
+unsigned long get_state_synchronize_sched(void)
+{
+	/*
+	 * Any prior manipulation of RCU-protected data must happen
+	 * before the load from ->gpnum.
+	 */
+	smp_mb();  /* ^^^ */
+
+	/*
+	 * Make sure this load happens before the purportedly
+	 * time-consuming work between get_state_synchronize_sched()
+	 * and cond_synchronize_sched().
+	 */
+	return smp_load_acquire(&rcu_sched_state.gpnum);
+}
+EXPORT_SYMBOL_GPL(get_state_synchronize_sched);
+
+/**
+ * cond_synchronize_sched - Conditionally wait for an RCU-sched grace period
+ *
+ * @oldstate: return value from earlier call to get_state_synchronize_sched()
+ *
+ * If a full RCU-sched grace period has elapsed since the earlier call to
+ * get_state_synchronize_sched(), just return.  Otherwise, invoke
+ * synchronize_sched() to wait for a full grace period.
+ *
+ * Yes, this function does not take counter wrap into account.  But
+ * counter wrap is harmless.  If the counter wraps, we have waited for
+ * more than 2 billion grace periods (and way more on a 64-bit system!),
+ * so waiting for one additional grace period should be just fine.
+ */
+void cond_synchronize_sched(unsigned long oldstate)
+{
+	unsigned long newstate;
+
+	/*
+	 * Ensure that this load happens before any RCU-destructive
+	 * actions the caller might carry out after we return.
+	 */
+	newstate = smp_load_acquire(&rcu_sched_state.completed);
+	if (ULONG_CMP_GE(oldstate, newstate))
+		synchronize_sched();
+}
+EXPORT_SYMBOL_GPL(cond_synchronize_sched);
+
 static int synchronize_sched_expedited_cpu_stop(void *data)
 {
 	/*
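[To illustrate how a caller would use the two new primitives, here is a minimal usage sketch of the snapshot/do-work/conditionally-wait pattern the changelog describes. It is not part of the patch above; struct foo and do_unrelated_setup() are placeholders.]

	/*
	 * Minimal usage sketch for the new primitives -- not part of the
	 * patch above.  struct foo and do_unrelated_setup() are placeholders.
	 */
	static void foo_replace_and_free(struct foo *old)
	{
		unsigned long gp_snap;

		gp_snap = get_state_synchronize_sched(); /* snapshot the current GP      */
		do_unrelated_setup();                    /* overlap useful work with the
							  * grace period                 */
		cond_synchronize_sched(gp_snap);         /* waits only if a full GP has
							  * not elapsed since the snapshot */
		kfree(old);                              /* no preempt-disabled reader can
							  * still hold a reference        */
	}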