Subject: Re: [PATCH -v6 10/13] futex,rt_mutex: Restructure rt_mutex_finish_proxy_lock()
On Wed, Mar 22, 2017 at 11:35:57AM +0100, Peter Zijlstra wrote:
> With the ultimate goal of keeping rt_mutex wait_list and futex_q
> waiters consistent we want to split 'rt_mutex_futex_lock()' into finer

I want to be sure I understand why this patch is needed - it actually moves
both the waiter removal and the rt_waiter freeing under the hb lock, even
though you've been working to be less dependent on the hb lock.

Was inconsistency between the rt_mutex wait_list and the futex_q waiters a
problem before this patch series, or do the previous patches make this one
necessary?

It makes sense that for the two to be consistent they should be manipulated
under a common lock; a distilled sketch of the resulting caller pattern, as I
read it, follows the quoted patch below.

> parts, such that only the actual blocking can be done without hb->lock
> held.
>
> This means we need to split rt_mutex_finish_proxy_lock() into two
> parts, one that does the blocking and one that does remove_waiter()
> when we fail to acquire.
>
> When we do acquire, we can safely remove ourselves, since there is no
> concurrency on the lock owner.
>
> This means that, except for futex_lock_pi(), all wait_list
> modifications are done with both hb->lock and wait_lock held.
>
> [bigeasy@linutronix.de: fix for futex_requeue_pi_signal_restart]
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
> kernel/futex.c | 7 +++--
> kernel/locking/rtmutex.c | 53 ++++++++++++++++++++++++++++++++++------
> kernel/locking/rtmutex_common.h | 8 +++---
> 3 files changed, 56 insertions(+), 12 deletions(-)
>
> --- a/kernel/futex.c
> +++ b/kernel/futex.c
> @@ -3032,10 +3032,13 @@ static int futex_wait_requeue_pi(u32 __u
> */
> WARN_ON(!q.pi_state);
> pi_mutex = &q.pi_state->pi_mutex;
> - ret = rt_mutex_finish_proxy_lock(pi_mutex, to, &rt_waiter);
> - debug_rt_mutex_free_waiter(&rt_waiter);
> + ret = rt_mutex_wait_proxy_lock(pi_mutex, to, &rt_waiter);
>
> spin_lock(q.lock_ptr);
> + if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter))
> + ret = 0;
> +
> + debug_rt_mutex_free_waiter(&rt_waiter);
> /*
> * Fixup the pi_state owner and possibly acquire the lock if we
> * haven't already.
> --- a/kernel/locking/rtmutex.c
> +++ b/kernel/locking/rtmutex.c
> @@ -1753,21 +1753,23 @@ struct task_struct *rt_mutex_next_owner(
> }
>
> /**
> - * rt_mutex_finish_proxy_lock() - Complete lock acquisition
> + * rt_mutex_wait_proxy_lock() - Wait for lock acquisition
> * @lock: the rt_mutex we were woken on
> * @to: the timeout, null if none. hrtimer should already have
> * been started.
> * @waiter: the pre-initialized rt_mutex_waiter
> *
> - * Complete the lock acquisition started our behalf by another thread.
> + * Wait for the lock acquisition started on our behalf by
> + * rt_mutex_start_proxy_lock(). Upon failure, the caller must call
> + * rt_mutex_cleanup_proxy_lock().
> *
> * Returns:
> * 0 - success
> * <0 - error, one of -EINTR, -ETIMEDOUT
> *
> - * Special API call for PI-futex requeue support
> + * Special API call for PI-futex support
> */
> -int rt_mutex_finish_proxy_lock(struct rt_mutex *lock,
> +int rt_mutex_wait_proxy_lock(struct rt_mutex *lock,
> struct hrtimer_sleeper *to,
> struct rt_mutex_waiter *waiter)
> {
> @@ -1780,9 +1782,6 @@ int rt_mutex_finish_proxy_lock(struct rt
> /* sleep on the mutex */
> ret = __rt_mutex_slowlock(lock, TASK_INTERRUPTIBLE, to, waiter);
>
> - if (unlikely(ret))
> - remove_waiter(lock, waiter);
> -
> /*
> * try_to_take_rt_mutex() sets the waiter bit unconditionally. We might
> * have to fix that up.
> @@ -1793,3 +1792,43 @@ int rt_mutex_finish_proxy_lock(struct rt
>
> return ret;
> }
> +
> +/**
> + * rt_mutex_cleanup_proxy_lock() - Cleanup failed lock acquisition
> + * @lock: the rt_mutex we were woken on
> + * @waiter: the pre-initialized rt_mutex_waiter
> + *
> + * Attempt to clean up after a failed rt_mutex_wait_proxy_lock().
> + *
> + * Unless we acquired the lock, we're still enqueued on the wait-list and
> + * can in fact still be granted ownership until we're removed. Therefore
> + * we can find that we are the owner and must disregard the
> + * rt_mutex_wait_proxy_lock() failure.
> + *
> + * Returns:
> + * true - did the cleanup, we are done.
> + * false - we acquired the lock after rt_mutex_wait_proxy_lock() returned,
> + * caller should disregard its return value.
> + *
> + * Special API call for PI-futex support
> + */
> +bool rt_mutex_cleanup_proxy_lock(struct rt_mutex *lock,
> + struct rt_mutex_waiter *waiter)
> +{
> + bool cleanup = false;
> +
> + raw_spin_lock_irq(&lock->wait_lock);
> + /*
> + * Unless we're the owner, we're still enqueued on the wait_list.
> + * So check if we became owner, if not, take us off the wait_list.
> + */
> + if (rt_mutex_owner(lock) != current) {
> + remove_waiter(lock, waiter);
> + fixup_rt_mutex_waiters(lock);
> + cleanup = true;
> + }
> + raw_spin_unlock_irq(&lock->wait_lock);
> +
> + return cleanup;
> +}
> +
> --- a/kernel/locking/rtmutex_common.h
> +++ b/kernel/locking/rtmutex_common.h
> @@ -107,9 +107,11 @@ extern void rt_mutex_init_waiter(struct
> extern int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
> struct rt_mutex_waiter *waiter,
> struct task_struct *task);
> -extern int rt_mutex_finish_proxy_lock(struct rt_mutex *lock,
> - struct hrtimer_sleeper *to,
> - struct rt_mutex_waiter *waiter);
> +extern int rt_mutex_wait_proxy_lock(struct rt_mutex *lock,
> + struct hrtimer_sleeper *to,
> + struct rt_mutex_waiter *waiter);
> +extern bool rt_mutex_cleanup_proxy_lock(struct rt_mutex *lock,
> + struct rt_mutex_waiter *waiter);
>
> extern int rt_mutex_timed_futex_lock(struct rt_mutex *l, struct hrtimer_sleeper *to);
> extern int rt_mutex_futex_trylock(struct rt_mutex *l);
>

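To make sure I'm reading the new API contract correctly, here is a distilled
sketch of the caller pattern that falls out of the futex.c hunk above. This
is not kernel code, just the shape of the protocol: hb_lock()/hb_unlock() are
hypothetical stand-ins for spin_lock(q.lock_ptr)/spin_unlock(q.lock_ptr).

    /*
     * Sketch of the split proxy-lock flow. hb_lock()/hb_unlock() are
     * hypothetical stand-ins for the futex hash-bucket locking.
     */
    static int proxy_lock_pattern(struct rt_mutex *pi_mutex,
                                  struct hrtimer_sleeper *to,
                                  struct rt_mutex_waiter *waiter)
    {
            int ret;

            /* Block on the mutex without hb->lock held. */
            ret = rt_mutex_wait_proxy_lock(pi_mutex, to, waiter);

            hb_lock();
            /*
             * We stayed enqueued on the wait_list, so even after
             * -EINTR/-ETIMEDOUT we may have been granted the lock in
             * the meantime. In that case cleanup returns false and
             * the error must be dropped.
             */
            if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, waiter))
                    ret = 0;

            /* Waiter removal and freeing both happen under hb->lock. */
            debug_rt_mutex_free_waiter(waiter);

            /* ... owner fixup etc., still under hb->lock ... */
            hb_unlock();

            return ret;
    }

If that reading is right, then after this patch the only wait_list operation
still done without hb->lock held is the blocking itself, which matches the
changelog.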
--
Darren Hart
VMware Open Source Technology Center
