Subject: Re: [PATCH] locktorture: Add raw_spinlock* torture tests for PREEMPT_RT kernels
On Sun, Feb 19, 2023 at 05:04:41AM +0000, Zhang, Qiang1 wrote:
>
> > On Wed, Feb 15, 2023 at 02:10:35PM +0800, Zqiang wrote:
> > For PREEMPT_RT kernels, spin_lock() and spin_lock_irq() are converted
> > to the sleepable rt_spin_lock(), and the interrupt-related suffixes for
> > spin_lock/unlock (_irq, irqsave/irqrestore) do not affect the CPU's
> > interrupt state. This commit therefore adds raw_spin_lock torture
> > tests; raw_spin_lock is a strict spin-lock implementation in RT kernels.
> >
> > Signed-off-by: Zqiang <qiang1.zhang@intel.com>
> >
> >A nice addition! Is this something you will be testing regularly?
> >If not, should there be additional locktorture scenarios, perhaps prefixed
> >by "RT-" to hint that they are not normally available?
> >
> >Or did you have some other plan for making use of these?
>
> Hi Paul
>
> Thanks for the reply. In fact, I want to enrich locktorture's coverage;
> after all, under the PREEMPT_RT kernel we lose the tests of the real
> spin locks.
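
To make that concrete: on a PREEMPT_RT kernel, spin_lock_irqsave() takes
the sleepable rt_spin_lock and leaves hardware interrupts enabled, while
raw_spin_lock_irqsave() still really spins with interrupts off. A minimal
sketch of the difference, using hypothetical lock names (demo_lock,
demo_raw_lock) that are not part of the patch:

#include <linux/spinlock.h>

/* Illustration only: these locks are hypothetical. */
static DEFINE_SPINLOCK(demo_lock);		/* rt_mutex-backed on RT */
static DEFINE_RAW_SPINLOCK(demo_raw_lock);	/* true spinlock everywhere */

static void demo(void)
{
	unsigned long flags;

	spin_lock_irqsave(&demo_lock, flags);
	/* PREEMPT_RT: may sleep, and irqs_disabled() is still false here. */
	spin_unlock_irqrestore(&demo_lock, flags);

	raw_spin_lock_irqsave(&demo_raw_lock, flags);
	/* All configs: irqs_disabled() is true here. */
	raw_spin_unlock_irqrestore(&demo_raw_lock, flags);
}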

Very well, how does the following look?

Thanx, Paul

------------------------------------------------------------------------

commit edc9d419ee8c22821ffd664466a5cf19208c3f02
Author: Zqiang <qiang1.zhang@intel.com>
Date:   Wed Feb 15 14:10:35 2023 +0800

    locktorture: Add raw_spinlock* torture tests for PREEMPT_RT kernels

    In PREEMPT_RT kernels, both spin_lock() and spin_lock_irq() are converted
    to the sleepable rt_spin_lock(). This means that the interrupt-related
    suffixes for spin_lock/unlock (_irq, irqsave/irqrestore) do not affect
    the CPU's interrupt state. This commit therefore adds raw spin-lock
    torture tests. This in turn permits pure spin locks to be tested in
    PREEMPT_RT kernels.

    Signed-off-by: Zqiang <qiang1.zhang@intel.com>
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
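
For anyone wanting to try these out, the new types hook into locktorture's
existing torture_type module parameter, so on a PREEMPT_RT kernel with
CONFIG_LOCK_TORTURE_TEST=m something like the following should work
(dedicated test scenario files would be a separate addition, per the
discussion above):

	modprobe locktorture torture_type=raw_spin_lock
	# ... let it run for a while, then:
	modprobe -r locktorture
	modprobe locktorture torture_type=raw_spin_lock_irq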

diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
index 9425aff089365..ed8e5baafe49f 100644
--- a/kernel/locking/locktorture.c
+++ b/kernel/locking/locktorture.c
@@ -257,6 +257,61 @@ static struct lock_torture_ops spin_lock_irq_ops = {
 	.name		= "spin_lock_irq"
 };
 
+#ifdef CONFIG_PREEMPT_RT
+static DEFINE_RAW_SPINLOCK(torture_raw_spinlock);
+
+static int torture_raw_spin_lock_write_lock(int tid __maybe_unused)
+__acquires(torture_raw_spinlock)
+{
+	raw_spin_lock(&torture_raw_spinlock);
+	return 0;
+}
+
+static void torture_raw_spin_lock_write_unlock(int tid __maybe_unused)
+__releases(torture_raw_spinlock)
+{
+	raw_spin_unlock(&torture_raw_spinlock);
+}
+
+static struct lock_torture_ops raw_spin_lock_ops = {
+	.writelock	= torture_raw_spin_lock_write_lock,
+	.write_delay	= torture_spin_lock_write_delay,
+	.task_boost	= torture_rt_boost,
+	.writeunlock	= torture_raw_spin_lock_write_unlock,
+	.readlock	= NULL,
+	.read_delay	= NULL,
+	.readunlock	= NULL,
+	.name		= "raw_spin_lock"
+};
+
+static int torture_raw_spin_lock_write_lock_irq(int tid __maybe_unused)
+__acquires(torture_raw_spinlock)
+{
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&torture_raw_spinlock, flags);
+	cxt.cur_ops->flags = flags;
+	return 0;
+}
+
+static void torture_raw_spin_lock_write_unlock_irq(int tid __maybe_unused)
+__releases(torture_raw_spinlock)
+{
+	raw_spin_unlock_irqrestore(&torture_raw_spinlock, cxt.cur_ops->flags);
+}
+
+static struct lock_torture_ops raw_spin_lock_irq_ops = {
+	.writelock	= torture_raw_spin_lock_write_lock_irq,
+	.write_delay	= torture_spin_lock_write_delay,
+	.task_boost	= torture_rt_boost,
+	.writeunlock	= torture_raw_spin_lock_write_unlock_irq,
+	.readlock	= NULL,
+	.read_delay	= NULL,
+	.readunlock	= NULL,
+	.name		= "raw_spin_lock_irq"
+};
+#endif // #ifdef CONFIG_PREEMPT_RT
+
 static DEFINE_RWLOCK(torture_rwlock);
 
 static int torture_rwlock_write_lock(int tid __maybe_unused)
@@ -1017,6 +1072,9 @@ static int __init lock_torture_init(void)
 	static struct lock_torture_ops *torture_ops[] = {
 		&lock_busted_ops,
 		&spin_lock_ops, &spin_lock_irq_ops,
+#ifdef CONFIG_PREEMPT_RT
+		&raw_spin_lock_ops, &raw_spin_lock_irq_ops,
+#endif // #ifdef CONFIG_PREEMPT_RT
 		&rw_lock_ops, &rw_lock_irq_ops,
 		&mutex_lock_ops,
 		&ww_mutex_lock_ops,
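
One note on the _irq variant above: because the lock and unlock sides are
separate callbacks, the saved interrupt flags cannot live in a local
variable, so they are parked in cxt.cur_ops->flags. That is safe because
only the current lock holder touches the field between the irqsave and the
irqrestore. The same pattern reduced to a sketch (the demo_* names are
hypothetical, for illustration only):

#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(demo_lock);
static unsigned long demo_saved_flags;	/* written only by the lock holder */

static void demo_acquire(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&demo_lock, flags);
	demo_saved_flags = flags;	/* publish for the unlock side */
}

static void demo_release(void)
{
	raw_spin_unlock_irqrestore(&demo_lock, demo_saved_flags);
}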