From: Waiman Long
Subject: [PATCH 0/4] locking/qrwlock: Improve qrwlock performance
Date: 2015-07-06

In converting some existing spinlocks to rwlocks, it was found that the
write-lock slowpath performance isn't as good as that of the qspinlock.
With a workload that added a large number of inodes to the superblock's
inode list and was rate-limited by the inode_sb_list_lock, converting
that spinlock into a write lock increased the runtime from about 22s
to 36s.
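
As a rough illustration of the kind of conversion involved (the
identifiers below are made up for the example, not the actual
inode_sb_list_lock code), the spinlock API calls are replaced
one-for-one by their rwlock counterparts, with list walkers taking
the read side:

#include <linux/list.h>
#include <linux/spinlock.h>	/* also pulls in the rwlock API */

static LIST_HEAD(demo_list);
static DEFINE_RWLOCK(demo_lock);	/* was: DEFINE_SPINLOCK(demo_lock) */

static void demo_add(struct list_head *entry)
{
	write_lock(&demo_lock);		/* was: spin_lock(&demo_lock) */
	list_add(entry, &demo_list);
	write_unlock(&demo_lock);	/* was: spin_unlock(&demo_lock) */
}

static void demo_walk(void (*fn)(struct list_head *))
{
	struct list_head *pos;

	read_lock(&demo_lock);		/* readers can now run concurrently */
	list_for_each(pos, &demo_list)
		fn(pos);
	read_unlock(&demo_lock);
}

The insert-heavy workload above is effectively all writers, so it pays
the write-lock slowpath cost on every contended acquisition.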

This patch series tries to squeeze out as much performance as possible
to close the gap between the qspinlock and the qrwlock. With all the
patches applied, the workload's runtime improves to about 24-25s,
which is much better than before, though still a bit slower than with
the spinlock.
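
For context on where that gap comes from, below is a simplified sketch
of the general shape of a queued-rwlock write slowpath (abbreviated
from kernel/locking/qrwlock.c; the field and constant names here are
illustrative, not exact). A contended writer first queues on an
internal wait lock, then needs several more atomic round-trips on the
lock's count word before it actually owns the lock:

/*
 * Sketch only: _QW_WAITING marks a pending writer and _QW_LOCKED an
 * active one, both in the low byte of cnts; the upper bits count
 * active readers.
 */
void queued_write_lock_slowpath(struct qrwlock *lock)
{
	u32 cnts;

	/* Queue behind other waiters (readers and writers). */
	arch_spin_lock(&lock->wait_lock);

	/* Lock is completely free: take it directly. */
	if (!atomic_read(&lock->cnts) &&
	    atomic_cmpxchg(&lock->cnts, 0, _QW_LOCKED) == 0)
		goto unlock;

	/* Publish the waiting flag so new readers back off. */
	for (;;) {
		cnts = atomic_read(&lock->cnts);
		if (!(cnts & _QW_WMASK) &&
		    atomic_cmpxchg(&lock->cnts, cnts,
				   cnts | _QW_WAITING) == cnts)
			break;
		cpu_relax();
	}

	/* Wait for the last active reader to drain, then lock. */
	while (atomic_cmpxchg(&lock->cnts, _QW_WAITING,
			      _QW_LOCKED) != _QW_WAITING)
		cpu_relax();

unlock:
	arch_spin_unlock(&lock->wait_lock);
}

Each cmpxchg loop above, plus the full lock/unlock of the wait lock,
is overhead that a plain qspinlock handoff does not pay; reducing
those transfer latencies and using direct MCS queue operations in the
slowpath is what the patches below do.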

With this patch series in place, we can start converting some spinlocks
back to rwlocks where it makes sense and the lock size increase isn't
a concern.

Waiman Long (4):
locking/qrwlock: Better optimization for interrupt context readers
locking/qrwlock: Reduce reader/writer to reader lock transfer latency
locking/qrwlock: Reduce writer to writer lock transfer latency
locking/qrwlock: Use direct MCS lock/unlock in slowpath

arch/x86/include/asm/qrwlock.h | 4 +
include/asm-generic/qrwlock.h | 4 +-
include/asm-generic/qrwlock_types.h | 26 ++++-
kernel/locking/qrwlock.c | 185 ++++++++++++++++++++++++++--------
kernel/locking/qspinlock.c | 9 +-
5 files changed, 174 insertions(+), 54 deletions(-)


