From: Linus Torvalds
Date: Mon, 9 Mar 2020
Subject: Re: [locks] 6d390e4b5d: will-it-scale.per_process_ops -96.6% regression
On Mon, Mar 9, 2020 at 7:36 AM Jeff Layton <jlayton@kernel.org> wrote:
>
> On Sun, 2020-03-08 at 22:03 +0800, kernel test robot wrote:
> >
> > FYI, we noticed a -96.6% regression of will-it-scale.per_process_ops due to commit:
>
> This is not completely unexpected as we're banging on the global
> blocked_lock_lock now for every unlock. This test just thrashes file
> locks and unlocks without doing anything in between, so the workload
> looks pretty artificial [1].
>
> It would be nice to avoid the global lock in this codepath, but it
> doesn't look simple to do. I'll keep thinking about it, but for now I'm
> inclined to ignore this result unless we see a problem in more realistic
> workloads.

That is a _huge_ regression, though.

What about something like the attached? Wouldn't that work? And it would
make the code actually match the old comment about how "fl_blocker"
being NULL is special.

The old code seemed to not know about things like memory ordering either.

The patch is entirely untested, but it aims to have that "smp_store_release()
means I'm done and not going to touch it any more" semantics, hopefully
making that smp_load_acquire() test valid as per the comment.
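The pairing being aimed for can be sketched in userspace with C11 atomics
standing in for smp_store_release()/smp_load_acquire(). The struct and
field names below are made up for illustration, not the kernel's:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct waiter {
	_Atomic(void *) blocker;	/* stand-in for fl_blocker */
	bool on_blocked_list;		/* stand-in for fl_blocked_member */
};

/* Waker: finish every write to the waiter, then publish "done". */
static void waker_finish(struct waiter *w)
{
	w->on_blocked_list = false;	/* the list removal */
	/*
	 * Release: the write above is guaranteed visible to anyone who
	 * acquire-loads blocker and observes NULL.
	 */
	atomic_store_explicit(&w->blocker, NULL, memory_order_release);
}

/* Waiter: a NULL acquire-load means the waker is done touching us. */
static bool waiter_fast_path(struct waiter *w)
{
	if (atomic_load_explicit(&w->blocker, memory_order_acquire) == NULL)
		return !w->on_blocked_list;	/* safe to read now */
	return false;		/* still blocked: fall back to the big lock */
}

int main(void)
{
	struct waiter w = { .on_blocked_list = true };

	atomic_store_explicit(&w.blocker, &w, memory_order_relaxed);
	waker_finish(&w);
	printf("fast path sees done: %d\n", waiter_fast_path(&w));
	return 0;
}

The kernel's smp_store_release()/smp_load_acquire() give the same one-way
ordering guarantee on all architectures.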

Hmm?

Linus
fs/locks.c | 29 ++++++++++++++++++++++++++++-
1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/fs/locks.c b/fs/locks.c
index 426b55d333d5..bc5ca54a0749 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -725,7 +725,6 @@ static void __locks_delete_block(struct file_lock *waiter)
{
locks_delete_global_blocked(waiter);
list_del_init(&waiter->fl_blocked_member);
- waiter->fl_blocker = NULL;
}

static void __locks_wake_up_blocks(struct file_lock *blocker)
@@ -740,6 +739,12 @@ static void __locks_wake_up_blocks(struct file_lock *blocker)
waiter->fl_lmops->lm_notify(waiter);
else
wake_up(&waiter->fl_wait);
+
+ /*
+ * Tell the world we're done with it - see comment at
+ * top of locks_delete_block().
+ */
+ smp_store_release(&waiter->fl_blocker, NULL);
}
}

@@ -753,11 +758,33 @@ int locks_delete_block(struct file_lock *waiter)
{
int status = -ENOENT;

+ /*
+ * If fl_blocker is NULL, it won't be set again as this thread
+ * "owns" the lock and is the only one that might try to claim
+ * the lock. So it is safe to test fl_blocker locklessly.
+ * Also if fl_blocker is NULL, this waiter is not listed on
+ * fl_blocked_requests for some lock, so no other request can
+ * be added to the list of fl_blocked_requests for this
+ * request. So if fl_blocker is NULL, it is safe to
+ * locklessly check if fl_blocked_requests is empty. If both
+ * of these checks succeed, there is no need to take the lock.
+ */
+ if (!smp_load_acquire(&waiter->fl_blocker)) {
+ if (list_empty(&waiter->fl_blocked_requests))
+ return status;
+ }
+
spin_lock(&blocked_lock_lock);
if (waiter->fl_blocker)
status = 0;
__locks_wake_up_blocks(waiter);
__locks_delete_block(waiter);
+
+ /*
+ * Tell the world we're done with it - see comment at top
+ * of this function
+ */
+ smp_store_release(&waiter->fl_blocker, NULL);
spin_unlock(&blocked_lock_lock);
return status;
}
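To see the shape of the patched locks_delete_block() in one place, it can
be modeled in userspace roughly as below: a pthread mutex stands in for
blocked_lock_lock, a plain flag stands in for the list_empty() check, and
all names are hypothetical:

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>

static pthread_mutex_t blocked_lock_lock = PTHREAD_MUTEX_INITIALIZER;

struct request {
	_Atomic(void *) blocker;	/* fl_blocker analogue */
	bool has_blocked_requests;	/* !list_empty(&fl_blocked_requests) */
};

static int delete_block(struct request *r)
{
	int status = -1;	/* local stand-in for -ENOENT */

	/*
	 * Fast path: has_blocked_requests may only be trusted after
	 * the acquire-load of blocker has observed NULL.
	 */
	if (atomic_load_explicit(&r->blocker, memory_order_acquire) == NULL &&
	    !r->has_blocked_requests)
		return status;

	/* Slow path, mirroring the patched function. */
	pthread_mutex_lock(&blocked_lock_lock);
	if (atomic_load_explicit(&r->blocker, memory_order_relaxed) != NULL)
		status = 0;
	/* ... wake our own blocked requests, unlink from the lists ... */
	r->has_blocked_requests = false;
	atomic_store_explicit(&r->blocker, NULL, memory_order_release);
	pthread_mutex_unlock(&blocked_lock_lock);
	return status;
}

int main(void)
{
	struct request r = { 0 };

	return delete_block(&r) == -1 ? 0 : 1;
}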