From: Paul E. McKenney <paul.mckenney@linaro.org>
Subject: [PATCH tip/core/rcu 41/55] rcu: Permit rt_mutex_unlock() with irqs disabled
Date: 2011-09-07

Create a separate lockdep class for the rt_mutex used for RCU priority
boosting and enable use of rt_mutex_lock() with irqs disabled. This
prevents RCU priority boosting from falling prey to deadlocks when
someone begins an RCU read-side critical section in preemptible state,
but releases it with an irq-disabled lock held.
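
For illustration, here is a minimal sketch (not part of this patch) of
the usage pattern in question: the reader enters the RCU read-side
critical section while preemptible (and so may be priority-boosted),
but ends it while holding an irq-disabled lock, so any deboost via
rt_mutex_unlock() runs with irqs disabled. The lock my_lock is
hypothetical.

	unsigned long flags;

	rcu_read_lock();			/* Preemptible here; task may be boosted. */
	spin_lock_irqsave(&my_lock, flags);	/* Irqs now disabled. */
	rcu_read_unlock();			/* Deboost may call rt_mutex_unlock() with irqs off. */
	spin_unlock_irqrestore(&my_lock, flags);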

Unfortunately, the scheduler's runqueue and priority-inheritance locks
still must either completely enclose or be completely enclosed by any
overlapping RCU read-side critical section.
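
For example (illustrative only, where rq is a scheduler runqueue
pointer), complete enclosure of the runqueue lock remains legal:

	rcu_read_lock();
	raw_spin_lock(&rq->lock);	/* rq->lock fully inside the critical section. */
	raw_spin_unlock(&rq->lock);
	rcu_read_unlock();

as does the reverse nesting, but a partial overlap, for example
acquiring rq->lock outside the RCU read-side critical section and
releasing it inside, remains prohibited.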

Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/rcutree_plugin.h |    6 ++++++
 kernel/rtmutex.c        |    8 ++++++++
 2 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index d3127e8..f6c63ea 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -1149,6 +1149,8 @@ static void rcu_initiate_boost_trace(struct rcu_node *rnp)
 
 #endif /* #else #ifdef CONFIG_RCU_TRACE */
 
+static struct lock_class_key rcu_boost_class;
+
 /*
  * Carry out RCU priority boosting on the task indicated by ->exp_tasks
  * or ->boost_tasks, advancing the pointer to the next task in the
@@ -1211,10 +1213,14 @@ static int rcu_boost(struct rcu_node *rnp)
 	 */
 	t = container_of(tb, struct task_struct, rcu_node_entry);
 	rt_mutex_init_proxy_locked(&mtx, t);
+	/* Avoid lockdep false positives.  This rt_mutex is its own thing. */
+	lockdep_set_class_and_name(&mtx.wait_lock, &rcu_boost_class,
+				   "rcu_boost_mutex");
 	t->rcu_boost_mutex = &mtx;
 	raw_spin_unlock_irqrestore(&rnp->lock, flags);
 	rt_mutex_lock(&mtx);  /* Side effect: boosts task t's priority. */
 	rt_mutex_unlock(&mtx);  /* Keep lockdep happy. */
+	local_irq_restore(flags);
 
 	return rnp->exp_tasks != NULL || rnp->boost_tasks != NULL;
 }
diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index ab44911..2548f44 100644
--- a/kernel/rtmutex.c
+++ b/kernel/rtmutex.c
@@ -579,6 +579,7 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state,
 		    struct rt_mutex_waiter *waiter)
 {
 	int ret = 0;
+	int was_disabled;
 
 	for (;;) {
 		/* Try to acquire the lock: */
@@ -601,10 +602,17 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state,
 
 		raw_spin_unlock(&lock->wait_lock);
 
+		was_disabled = irqs_disabled();
+		if (was_disabled)
+			local_irq_enable();
+
 		debug_rt_mutex_print_deadlock(waiter);
 
 		schedule_rt_mutex(lock);
 
+		if (was_disabled)
+			local_irq_disable();
+
 		raw_spin_lock(&lock->wait_lock);
 		set_current_state(state);
 	}
-- 
1.7.3.2

