Subject: Re: [PATCH][RFC]: mutex: adaptive spin


On Tue, 6 Jan 2009, Peter Zijlstra wrote:
>
> Indeed, the below does boot -- which means I get to sleep now ;-)

Well, if you didn't go to sleep, a few more questions..

>  int __sched
>  mutex_lock_killable_nested(struct mutex *lock, unsigned int subclass)
>  {
> +	int ret;
> +
>  	might_sleep();
> -	return __mutex_lock_common(lock, TASK_KILLABLE, subclass, _RET_IP_);
> +	ret = __mutex_lock_common(lock, TASK_KILLABLE, subclass, _RET_IP_);
> +	if (!ret)
> +		lock->owner = current;
> +
> +	return ret;

This looks ugly. Why doesn't __mutex_lock_common() just set the lock
owner? I hate seeing it done in the caller, which has to re-compute
'current' (yeah, yeah, it's cheap), and it just looks ugly.

IOW, why didn't this just get done with something like

--- a/kernel/mutex.c
+++ b/kernel/mutex.c
@@ -186,6 +186,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 done:
 	lock_acquired(&lock->dep_map, ip);
 	/* got the lock - rejoice! */
+	lock->owner = task;
 	mutex_remove_waiter(lock, &waiter, task_thread_info(task));
 	debug_mutex_set_owner(lock, task_thread_info(task));

instead? That takes care of all callers, including the conditional thing
(since the error case is a totally different path).
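
For illustration, a minimal sketch of how the killable variant quoted
above could then look, assuming the common path is the only place that
needs to record the owner:

int __sched
mutex_lock_killable_nested(struct mutex *lock, unsigned int subclass)
{
	/*
	 * __mutex_lock_common() records lock->owner itself on the
	 * success path; the error case returns before the "done:"
	 * label, so a failed acquisition never touches the field.
	 */
	might_sleep();
	return __mutex_lock_common(lock, TASK_KILLABLE, subclass, _RET_IP_);
}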

Linus

