Subject: Re: [PATCH] mutex: Report recursive ww_mutex locking early
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Date: 2016-05-30
On 26-05-16 at 22:08, Chris Wilson wrote:
> Recursive locking for ww_mutexes was originally conceived as an
> exception. However, it is heavily used by the DRM atomic modesetting
> code. Currently, the recursive deadlock is only checked after we have
> queued up for a busy-spin; as we never release the lock, we spin until
> kicked, whereupon the deadlock is discovered and reported.
>
> A simple solution for the now common problem is to move the recursive
> deadlock discovery to the first action when taking the ww_mutex.
>
> Testcase: igt/kms_cursor_legacy
> Suggested-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: linux-kernel@vger.kernel.org
> ---
>
> Maarten suggested this as a simpler fix to the immediate problem. Imo,
> we still want to perform deadlock detection within the spin in order to
> catch more complicated deadlocks without osq_lock() forcing fairness!
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>

Should this be Cc: stable@vger.kernel.org?

I think in the normal case things would still move forward even with
osq_lock(), but you could make a separate patch that adds the check to
mutex_can_spin_on_owner(), with the same comment as in
mutex_optimistic_spin().
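
Something like this, perhaps (an untested sketch; it assumes
mutex_optimistic_spin() passes its ww_ctx/use_ww_ctx arguments down,
since mutex_can_spin_on_owner() currently only takes the lock):

static inline int mutex_can_spin_on_owner(struct mutex *lock,
					  struct ww_acquire_ctx *ww_ctx,
					  const bool use_ww_ctx)
{
	struct task_struct *owner;
	int retval = 1;

	if (need_resched())
		return 0;

	if (use_ww_ctx && ww_ctx->acquired > 0) {
		struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);

		/*
		 * Same as in mutex_optimistic_spin(): if ww->ctx is set
		 * the contents are undefined, and deadlock detection may
		 * be needed, so don't bother spinning at all.
		 */
		if (READ_ONCE(ww->ctx))
			return 0;
	}

	rcu_read_lock();
	owner = READ_ONCE(lock->owner);
	if (owner)
		retval = owner->on_cpu;
	rcu_read_unlock();

	/*
	 * If lock->owner is not set, the mutex owner may have just
	 * acquired it and not set the owner yet, or the mutex may have
	 * been released.
	 */
	return retval;
}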
> ---
>  kernel/locking/mutex.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index d60f1ba3e64f..1659398dc8f8 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -502,9 +502,6 @@ __ww_mutex_lock_check_stamp(struct mutex *lock, struct ww_acquire_ctx *ctx)
>  	if (!hold_ctx)
>  		return 0;
>  
> -	if (unlikely(ctx == hold_ctx))
> -		return -EALREADY;
> -
>  	if (ctx->stamp - hold_ctx->stamp <= LONG_MAX &&
>  	    (ctx->stamp != hold_ctx->stamp || ctx > hold_ctx)) {
>  #ifdef CONFIG_DEBUG_MUTEXES
> @@ -530,6 +527,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>  	unsigned long flags;
>  	int ret;
>  
> +	if (use_ww_ctx) {
> +		struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
> +		if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
> +			return -EALREADY;
> +	}
> +
>  	preempt_disable();
>  	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
>
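
For reference, the user-visible change is only *when* -EALREADY is
reported, not the error itself. A recursive acquire like the following
(hypothetical obj / my_ww_class names) now fails immediately on entry
instead of after a futile busy-spin:

	struct ww_acquire_ctx ctx;
	int ret;

	ww_acquire_init(&ctx, &my_ww_class);

	ret = ww_mutex_lock(&obj->lock, &ctx);	/* 0: lock acquired */
	ret = ww_mutex_lock(&obj->lock, &ctx);	/* same ctx: -EALREADY,
						 * now caught before we
						 * queue up for the spin */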

