From: Davidlohr Bueso <>
Subject: [PATCH -tip/master 1/7] locking/mutex: Unify arguments in lock/unlock slowpaths
Date: Sun, 27 Jul 2014 22:18:38 -0700
Just as on the locking side, when unlocking, go ahead and obtain the proper data structure (struct mutex) immediately after the fastpath (asm-end) call drops into the slowpath and there are (probably) pending waiters. This simplifies the layering a bit.
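For reference, here is a minimal userspace sketch of the container_of() pattern that this patch moves out of the common slowpath and into the __visible entry point. The struct and field names below are made up for illustration, and the macro is a simplified stand-in for the kernel's definition:

#include <stdio.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's container_of() macro. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct foo {			/* plays the role of struct mutex */
	int a;
	int count;		/* plays the role of the atomic_t count */
};

int main(void)
{
	struct foo f = { .a = 42, .count = 1 };
	int *count_ptr = &f.count;

	/*
	 * Recover the enclosing structure from a pointer to one of its
	 * members, just as __mutex_unlock_slowpath() now recovers the
	 * struct mutex from the atomic_t * handed in by the fastpath.
	 */
	struct foo *fp = container_of(count_ptr, struct foo, count);

	printf("a = %d\n", fp->a);	/* prints: a = 42 */
	return 0;
}

With the conversion done once at the entry point, __mutex_unlock_common_slowpath() can take a struct mutex * directly, matching its counterpart on the locking side.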
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
---
 kernel/locking/mutex.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index ae712b2..ad0e333 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -679,9 +679,8 @@ EXPORT_SYMBOL_GPL(__ww_mutex_lock_interruptible);
  * Release the lock, slowpath:
  */
 static inline void
-__mutex_unlock_common_slowpath(atomic_t *lock_count, int nested)
+__mutex_unlock_common_slowpath(struct mutex *lock, int nested)
 {
-	struct mutex *lock = container_of(lock_count, struct mutex, count);
 	unsigned long flags;
 
 	/*
@@ -716,7 +715,9 @@ __mutex_unlock_common_slowpath(atomic_t *lock_count, int nested)
 __visible void
 __mutex_unlock_slowpath(atomic_t *lock_count)
 {
-	__mutex_unlock_common_slowpath(lock_count, 1);
+	struct mutex *lock = container_of(lock_count, struct mutex, count);
+
+	__mutex_unlock_common_slowpath(lock, 1);
 }
 
 #ifndef CONFIG_DEBUG_LOCK_ALLOC
-- 
1.8.1.4