    Subject: Re: [PATCH] sched: Teach might_sleep about preemptable rcu
    On Mon, Dec 14, 2009 at 11:44:32PM +0100, Frederic Weisbecker wrote:
    > In practice, it is harmless to voluntarily sleep in an rcu_read_lock()
    > section if we are running under preemptable rcu, but it is illegal if
    > we build a kernel running non-preemptable rcu.
    >
    > Currently, might_sleep() doesn't notice sleepable operations inside
    > rcu_read_lock() sections if we are running under preemptable rcu,
    > because preempt_count() is left untouched after rcu_read_lock() in
    > that case. But we want developers who test their changes under such
    > a config to notice the "sleeping while atomic" issues.
    >
    > So we add rcu_read_lock_nesting to preempt_count() in the
    > might_sleep() checks.
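
    For illustration, something like the following (made-up code, not from
    the patch; struct foo, global_foo, example_mutex and do_something() are
    hypothetical names) is the pattern that this change lets might_sleep()
    catch under CONFIG_TREE_PREEMPT_RCU:

        /*
         * Sleeping call inside an RCU read-side critical section.  With
         * preemptable RCU, rcu_read_lock() does not touch preempt_count(),
         * so might_sleep() used to stay silent here; with
         * rcu_preempt_depth() added to the check it now reports
         * "sleeping while atomic".
         */
        static void example_reader(void)
        {
                struct foo *p;

                rcu_read_lock();
                p = rcu_dereference(global_foo);
                mutex_lock(&example_mutex);     /* may sleep: triggers the warning */
                do_something(p);
                mutex_unlock(&example_mutex);
                rcu_read_unlock();
        }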

    Cute!!!

    Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

    > Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    > Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
    > Cc: Peter Zijlstra <peterz@infradead.org>
    > ---
    >  include/linux/rcutree.h |   11 +++++++++++
    >  kernel/sched.c          |    2 +-
    >  2 files changed, 12 insertions(+), 1 deletions(-)
    >
    > diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
    > index c93eee5..8044b1b 100644
    > --- a/include/linux/rcutree.h
    > +++ b/include/linux/rcutree.h
    > @@ -45,6 +45,12 @@ extern void __rcu_read_unlock(void);
    >  extern void synchronize_rcu(void);
    >  extern void exit_rcu(void);
    >  
    > +/*
    > + * Defined as macro as it is a very low level header
    > + * included from areas that don't even know about current
    > + */
    > +#define rcu_preempt_depth() (current->rcu_read_lock_nesting)
    > +
    >  #else /* #ifdef CONFIG_TREE_PREEMPT_RCU */
    >  
    >  static inline void __rcu_read_lock(void)
    > @@ -63,6 +69,11 @@ static inline void exit_rcu(void)
    >  {
    >  }
    >  
    > +static inline int rcu_preempt_depth(void)
    > +{
    > +	return 0;
    > +}
    > +
    >  #endif /* #else #ifdef CONFIG_TREE_PREEMPT_RCU */
    >  
    >  static inline void __rcu_read_lock_bh(void)
    > diff --git a/kernel/sched.c b/kernel/sched.c
    > index ab42754..586c82c 100644
    > --- a/kernel/sched.c
    > +++ b/kernel/sched.c
    > @@ -9658,7 +9658,7 @@ void __init sched_init(void)
    >  #ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
    >  static inline int preempt_count_equals(int preempt_offset)
    >  {
    > -	int nested = preempt_count() & ~PREEMPT_ACTIVE;
    > +	int nested = (preempt_count() & ~PREEMPT_ACTIVE) + rcu_preempt_depth();
    >  
    >  	return (nested == PREEMPT_INATOMIC_BASE + preempt_offset);
    >  }
    > --
    > 1.6.2.3
    >
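
    To spell out why this covers both rcu flavours, a sketch (illustration
    only; in_atomic_or_rcu() is a made-up name, not a kernel helper):

        /*
         * With !CONFIG_TREE_PREEMPT_RCU, rcu_read_lock() maps to
         * preempt_disable(), so the read-side nesting is already visible
         * in preempt_count() and rcu_preempt_depth() returns 0.  With
         * CONFIG_TREE_PREEMPT_RCU, the nesting lives in
         * current->rcu_read_lock_nesting, which is what rcu_preempt_depth()
         * returns.  Either way the sum below is non-zero inside an RCU
         * read-side critical section (and in any other atomic context),
         * which is the quantity preempt_count_equals() now compares
         * against the expected offset.
         */
        static inline int in_atomic_or_rcu(void)
        {
                return ((preempt_count() & ~PREEMPT_ACTIVE) + rcu_preempt_depth()) != 0;
        }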

