Subject: Re: d1fc031747 ("sched/wait: assert the wait_queue_head lock is .."): EIP: __wake_up_common
On Wed, Dec 13, 2017 at 05:03:00PM -0800, Andrew Morton wrote:
> > sched/wait: assert the wait_queue_head lock is held in __wake_up_common
> >
> > Better ensure we actually hold the lock using lockdep than just commenting
> > on it. Due to the various exported _locked interfaces it is far too easy
> > to get the locking wrong.
>
> I'm probably sitting on an older version. I've dropped
>
> epoll: use the waitqueue lock to protect ep->wq
> sched/wait: assert the wait_queue_head lock is held in __wake_up_common

Looks pretty clear to me that userfaultfd is also abusing the wake_up_locked
interfaces:

spin_lock(&ctx->fault_pending_wqh.lock);
__wake_up_locked_key(&ctx->fault_pending_wqh, TASK_NORMAL, &range);
__wake_up_locked_key(&ctx->fault_wqh, TASK_NORMAL, &range);
spin_unlock(&ctx->fault_pending_wqh.lock);

Sure, it's locked, but not by the lock you thought it was going to be.
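
(For reference, this is roughly what d1fc031747 adds -- paraphrased from
memory, not the exact upstream code -- which is why the above trips it:)

static int __wake_up_common(struct wait_queue_head *wq_head, unsigned int mode,
			    int nr_exclusive, int wake_flags, void *key)
{
	wait_queue_entry_t *curr, *next;

	/* New assertion: the wait queue head's *own* lock must be held. */
	lockdep_assert_held(&wq_head->lock);

	list_for_each_entry_safe(curr, next, &wq_head->head, entry) {
		unsigned flags = curr->flags;
		int ret = curr->func(curr, mode, wake_flags, key);

		if (ret < 0)
			break;
		if (ret && (flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive)
			break;
	}
	return nr_exclusive;
}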

There doesn't actually appear to be a bug here; fault_wqh is always serialised
by fault_pending_wqh.lock, but lockdep can't know that. I think this patch
will solve the problem.
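
(Simplified sketch of why that works -- not the exact scheduler code:
__wake_up() takes the waitqueue's own lock around __wake_up_common(), so
the new assertion is satisfied, and fault_wqh.lock simply nests inside
fault_pending_wqh.lock at these call sites.)

void __wake_up(struct wait_queue_head *wq_head, unsigned int mode,
	       int nr_exclusive, void *key)
{
	unsigned long flags;

	/* Take the waitqueue's own lock before walking its waiters. */
	spin_lock_irqsave(&wq_head->lock, flags);
	__wake_up_common(wq_head, mode, nr_exclusive, 0, key);
	spin_unlock_irqrestore(&wq_head->lock, flags);
}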

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index ac9a4e65ca49..a39bc3237b68 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -879,7 +879,7 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
*/
spin_lock(&ctx->fault_pending_wqh.lock);
__wake_up_locked_key(&ctx->fault_pending_wqh, TASK_NORMAL, &range);
- __wake_up_locked_key(&ctx->fault_wqh, TASK_NORMAL, &range);
+ __wake_up(&ctx->fault_wqh, TASK_NORMAL, 1, &range);
spin_unlock(&ctx->fault_pending_wqh.lock);

/* Flush pending events that may still wait on event_wqh */
@@ -1045,7 +1045,7 @@ static ssize_t userfaultfd_ctx_read(struct userfaultfd_ctx *ctx, int no_wait,
* anyway.
*/
list_del(&uwq->wq.entry);
- __add_wait_queue(&ctx->fault_wqh, &uwq->wq);
+ add_wait_queue(&ctx->fault_wqh, &uwq->wq);

write_seqcount_end(&ctx->refile_seq);

@@ -1194,7 +1194,7 @@ static void __wake_userfault(struct userfaultfd_ctx *ctx,
__wake_up_locked_key(&ctx->fault_pending_wqh, TASK_NORMAL,
range);
if (waitqueue_active(&ctx->fault_wqh))
- __wake_up_locked_key(&ctx->fault_wqh, TASK_NORMAL, range);
+ __wake_up(&ctx->fault_wqh, TASK_NORMAL, 1, range);
spin_unlock(&ctx->fault_pending_wqh.lock);
}
