Subject: Re: [PATCH] locking/rwsem: Take read lock immediate if empty queue with no writer
On Tue, Jul 10, 2018 at 02:31:30PM -0400, Waiman Long wrote:
> It was found that a constant stream of readers might cause the count to
> go negative most of the time after an initial trigger by a writer, even
> if no writer was present afterward. As a result, most of the readers
> would have to go through the slowpath, reducing their performance.
>
> To prevent that from happening, an additional check is added to detect
> the special case where the reader in the critical section is the only
> one in the wait queue and no writer is present. When that happens, it
> can just take the lock and return immediately without further action.
> Other incoming readers then won't see a waiter present and won't be
> forced into the slowpath.
>
> After the list_empty() calls, the CPU should have the lock cacheline
> anyway, so an additional semaphore count check shouldn't have any
> performance impact.
>
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
> kernel/locking/rwsem-xadd.c | 16 +++++++++++++++-
> 1 file changed, 15 insertions(+), 1 deletion(-)

This looks ok to me, but it would be nice to include some performance
figures in the commit log. Do you have any? Phrases such as "shouldn't have
any performance impact" and "probably generate better code" don't fill me
with good feelings ;)

Will
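
(For archive readers: a minimal way to get a feel for the kind of numbers
being asked for is sketched below. It is not from this thread and it
exercises a userspace pthread_rwlock_t rather than the kernel rwsem, so the
thread count, run time and the single "trigger" writer are illustrative
assumptions only; measuring the patch itself would need the same workload
shape run against a patched and an unpatched kernel.)

/*
 * Hedged sketch: a userspace analogue of the workload in the commit log
 * (a brief writer, then a constant stream of readers). pthread_rwlock_t
 * stands in for the kernel rwsem; NREADERS and NSECONDS are arbitrary.
 * Build with: gcc -O2 -pthread
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NREADERS	8
#define NSECONDS	5

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static volatile int stop;
static unsigned long acquisitions[NREADERS];

static void *reader(void *arg)
{
	unsigned long idx = (unsigned long)arg, n = 0;

	while (!stop) {
		pthread_rwlock_rdlock(&lock);
		pthread_rwlock_unlock(&lock);
		n++;
	}
	acquisitions[idx] = n;
	return NULL;
}

int main(void)
{
	pthread_t tid[NREADERS];
	unsigned long i, total = 0;

	for (i = 0; i < NREADERS; i++)
		pthread_create(&tid[i], NULL, reader, (void *)i);

	/* One short write-side acquisition acts as the "initial trigger". */
	pthread_rwlock_wrlock(&lock);
	usleep(1000);
	pthread_rwlock_unlock(&lock);

	sleep(NSECONDS);
	stop = 1;

	for (i = 0; i < NREADERS; i++) {
		pthread_join(tid[i], NULL);
		total += acquisitions[i];
	}
	printf("read lock acquisitions/sec: %lu\n", total / NSECONDS);
	return 0;
}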

> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index 3064c50..ef8a5f3 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -233,8 +233,22 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
>  	waiter.type = RWSEM_WAITING_FOR_READ;
> 
>  	raw_spin_lock_irq(&sem->wait_lock);
> -	if (list_empty(&sem->wait_list))
> +	if (list_empty(&sem->wait_list)) {
> +		/*
> +		 * In the unlikely event that the task is the only one in
> +		 * the wait queue and a writer isn't present, it can have
> +		 * the lock and return immediately without going through
> +		 * the remaining slowpath code.
> +		 *
> +		 * Count won't be 0, but allowing it will probably generate
> +		 * better code.
> +		 */
> +		if (unlikely(atomic_long_read(&sem->count) >= 0)) {
> +			raw_spin_unlock_irq(&sem->wait_lock);
> +			return sem;
> +		}
>  		adjustment += RWSEM_WAITING_BIAS;
> +	}
>  	list_add_tail(&waiter.list, &sem->wait_list);
> 
>  	/* we're now waiting on the lock, but no longer actively locking */
> --
> 1.8.3.1
>
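
(For archive readers: the arithmetic behind "the count goes negative" in the
commit log can be sketched as follows. The bias constants restate the 64-bit
rwsem-xadd layout of that era as an assumption, not a quote from this thread,
and the scenario values are illustrative only.)

/*
 * Hedged sketch: why a single queued waiter pushes the rwsem count
 * negative and forces later readers into the slowpath. The constants
 * follow the 64-bit rwsem-xadd layout (an assumption, not part of the
 * patch above).
 */
#include <stdio.h>

#define RWSEM_ACTIVE_MASK	0xffffffffL
#define RWSEM_ACTIVE_BIAS	0x00000001L
#define RWSEM_WAITING_BIAS	(-RWSEM_ACTIVE_MASK - 1)
#define RWSEM_ACTIVE_READ_BIAS	RWSEM_ACTIVE_BIAS

int main(void)
{
	long count = 0;

	/*
	 * Reader fast path: add the read bias; a result <= 0 sends the
	 * reader into the slowpath.
	 */
	count += RWSEM_ACTIVE_READ_BIAS;
	printf("one active reader:      %ld\n", count);

	/*
	 * A reader queueing itself adds RWSEM_WAITING_BIAS, so every later
	 * reader's fast path sees a negative count even though no writer
	 * is around.
	 */
	printf("reader + queued waiter: %ld\n", count + RWSEM_WAITING_BIAS);

	/*
	 * With the patch, a slowpath reader that finds the wait list empty
	 * and count >= 0 keeps the read lock it already counted and
	 * returns, so RWSEM_WAITING_BIAS is never added and later readers
	 * stay on the fast path.
	 */
	printf("bias never added:       %ld\n", count);
	return 0;
}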