Subject: [patch 49/50] locking/rtmutex: Implement equal priority lock stealing
    From: Gregory Haskins <ghaskins@novell.com>

    The current logic only allows lock stealing to occur if the current task is
    of higher priority than the pending owner.

Significant throughput improvements can be gained by allowing lock
stealing to include tasks of equal priority when the contended lock is a
spin_lock or a rw_lock and the tasks are not in an RT scheduling class.

The assumption was that the system would make faster progress by allowing
the task already on the CPU to take the lock rather than waiting for the
system to wake up a different task.

    This does add a degree of unfairness, but in reality no negative side
    effects have been observed in the many years that this has been used in the
    RT kernel.
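
As an illustration of the rule above, here is a minimal user-space
sketch. This is not the kernel code: the waiter struct, the prio
encoding and the helpers are simplified stand-ins for rt_mutex_waiter,
rt_mutex_waiter_less() and rt_mutex_waiter_equal(), and deadline tasks
are ignored. In the kernel's view a lower numeric prio means higher
priority, and lateral (equal priority) steals are only compiled in for
the spin_lock/rw_lock substitutions (RT_MUTEX_BUILD_SPINLOCKS):

#include <stdbool.h>
#include <stdio.h>

#define MAX_RT_PRIO	100	/* prio < MAX_RT_PRIO denotes an RT task */

struct waiter {
	int prio;		/* kernel view: lower value == higher prio */
};

static bool waiter_less(struct waiter *a, struct waiter *b)
{
	return a->prio < b->prio;
}

static bool waiter_equal(struct waiter *a, struct waiter *b)
{
	return a->prio == b->prio;
}

/*
 * build_spinlocks stands in for RT_MUTEX_BUILD_SPINLOCKS: lateral
 * (equal priority) steals are only allowed for the spin_lock/rw_lock
 * substitutions, and never for RT tasks, to avoid unbounded latency.
 */
static bool can_steal(struct waiter *w, struct waiter *top,
		      bool build_spinlocks)
{
	if (waiter_less(w, top))
		return true;	/* strictly higher priority always wins */

	if (!build_spinlocks)
		return false;	/* mutexes: no lateral stealing */

	if (w->prio < MAX_RT_PRIO)
		return false;	/* RT task: lateral steal not allowed */

	return waiter_equal(w, top);
}

int main(void)
{
	struct waiter top = { .prio = 120 };	/* SCHED_OTHER top waiter */
	struct waiter eq  = { .prio = 120 };	/* equal prio, non-RT */
	struct waiter rt  = { .prio = 50 };	/* RT task */

	printf("non-RT equal prio, spinlock: %d\n", can_steal(&eq, &top, true));
	printf("non-RT equal prio, mutex:    %d\n", can_steal(&eq, &top, false));
	printf("RT equal prio,     spinlock: %d\n", can_steal(&rt, &rt, true));
	return 0;
}

Compiled as plain C, this prints 1/0/0: an equal-priority non-RT task
may steal a contended spinlock, but not a mutex, and an RT task never
steals laterally.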

    [ tglx: Refactored and rewritten several times by Steve Rostedt, Sebastian
    Siewior and myself ]

    Signed-off-by: Gregory Haskins <ghaskins@novell.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    ---
    kernel/locking/rtmutex.c | 52 +++++++++++++++++++++++++++++++----------------
    1 file changed, 35 insertions(+), 17 deletions(-)
    ---
    --- a/kernel/locking/rtmutex.c
    +++ b/kernel/locking/rtmutex.c
@@ -286,6 +286,26 @@ static __always_inline int rt_mutex_wait
 	return 1;
 }
 
+static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+				  struct rt_mutex_waiter *top_waiter)
+{
+	if (rt_mutex_waiter_less(waiter, top_waiter))
+		return true;
+
+#ifdef RT_MUTEX_BUILD_SPINLOCKS
+	/*
+	 * Note that RT tasks are excluded from same priority (lateral)
+	 * steals to prevent the introduction of an unbounded latency.
+	 */
+	if (rt_prio(waiter->prio) || dl_prio(waiter->prio))
+		return false;
+
+	return rt_mutex_waiter_equal(waiter, top_waiter);
+#else
+	return false;
+#endif
+}
+
 #define __node_2_waiter(node) \
 	rb_entry((node), struct rt_mutex_waiter, tree_entry)
 
@@ -858,19 +878,21 @@ try_to_take_rt_mutex(struct rt_mutex *lo
 	 * trylock attempt.
	 */
 	if (waiter) {
-		/*
-		 * If waiter is not the highest priority waiter of
-		 * @lock, give up.
-		 */
-		if (waiter != rt_mutex_top_waiter(lock))
-			return 0;
+		struct rt_mutex_waiter *top_waiter = rt_mutex_top_waiter(lock);
 
 		/*
-		 * We can acquire the lock. Remove the waiter from the
-		 * lock waiters tree.
+		 * If waiter is the highest priority waiter of @lock,
+		 * or allowed to steal it, take it over.
 		 */
-		rt_mutex_dequeue(lock, waiter);
-
+		if (waiter == top_waiter || rt_mutex_steal(waiter, top_waiter)) {
+			/*
+			 * We can acquire the lock. Remove the waiter from the
+			 * lock waiters tree.
+			 */
+			rt_mutex_dequeue(lock, waiter);
+		} else {
+			return 0;
+		}
 	} else {
 		/*
 		 * If the lock has waiters already we check whether @task is
@@ -881,13 +903,9 @@ try_to_take_rt_mutex(struct rt_mutex *lo
 		 * not need to be dequeued.
 		 */
 		if (rt_mutex_has_waiters(lock)) {
-			/*
-			 * If @task->prio is greater than or equal to
-			 * the top waiter priority (kernel view),
-			 * @task lost.
-			 */
-			if (!rt_mutex_waiter_less(task_to_waiter(task),
-						  rt_mutex_top_waiter(lock)))
+			/* Check whether the trylock can steal it. */
+			if (!rt_mutex_steal(task_to_waiter(task),
+					    rt_mutex_top_waiter(lock)))
 				return 0;
 
 			/*