Subject: Re: [PATCH 3/9] locktorture: Support mutexes

On Fri, 2014-09-12 at 11:02 -0700, Paul E. McKenney wrote:
> On Thu, Sep 11, 2014 at 08:40:18PM -0700, Davidlohr Bueso wrote:
> > +static void torture_mutex_delay(struct torture_random_state *trsp)
> > +{
> > +	const unsigned long longdelay_ms = 100;
> > +
> > +	/* We want a long delay occasionally to force massive contention. */
> > +	if (!(torture_random(trsp) %
> > +	      (nrealwriters_stress * 2000 * longdelay_ms)))
> > +		mdelay(longdelay_ms * 5);
>
> So let's see... We wait 500 milliseconds about once per 200,000 operations
> per writer. So if we have 5 writers, we wait 500 milliseconds per million
> operations. So each writer will do about 200,000 operations, then there
> will be a half-second gap. But each short operation holds the lock for
> 20 milliseconds, which takes several hours to work through the million
> operations.
>
> So it looks to me like you are in massive contention state either way,
> at least until the next stutter interval shows up.
>
> Is that the intent? Or am I missing something here?
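
To make the arithmetic above concrete, here is a minimal userspace
sketch -- not part of the patch -- assuming the five writers from
Paul's example and the patch's longdelay_ms = 100:

	#include <stdio.h>

	int main(void)
	{
		/* Assumed values: 5 writers (Paul's example), 100 ms (patch). */
		const unsigned long nrealwriters_stress = 5;
		const unsigned long longdelay_ms = 100;

		/* Long-path odds: 1 in (writers * 2000 * longdelay_ms). */
		unsigned long ops_per_long = nrealwriters_stress * 2000 * longdelay_ms;
		unsigned long short_ms = longdelay_ms / 5;	/* common 20 ms hold */

		printf("long hold:  %lu ms, once per %lu ops per writer\n",
		       longdelay_ms * 5, ops_per_long);
		printf("short hold: %lu ms, on all remaining ops\n", short_ms);
		printf("%lu short holds back to back: ~%.1f hours\n",
		       ops_per_long, ops_per_long * short_ms / 3600000.0);
		return 0;
	}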

Ah, nice description. Yes, I am aiming for constant massive contention
(should have mentioned this, sorry). I believe it stresses the more
interesting parts of mutexes -- and rwsems, for that matter. If you
think it's excessive, we could decrease the large wait and/or increase
the short one. I made the delay a factor of the default stutter value
-- we could also make it always equal to it.
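
For instance -- just a sketch, not something this patch does -- the
hold times could be exposed through the torture_param() helper instead
of being hard-coded (knob names here are hypothetical):

	torture_param(int, long_hold_ms, 500, "Long mutex hold time in ms");
	torture_param(int, short_hold_ms, 20, "Short mutex hold time in ms");

	static void torture_mutex_delay(struct torture_random_state *trsp)
	{
		/* Same 1-in-(nrealwriters_stress * 200000) odds as the patch. */
		if (!(torture_random(trsp) % (nrealwriters_stress * 200000UL)))
			mdelay(long_hold_ms);
		else
			mdelay(short_hold_ms);
	}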

> > +	else
> > +		mdelay(longdelay_ms / 5);
> > +#ifdef CONFIG_PREEMPT
> > +	if (!(torture_random(trsp) % (nrealwriters_stress * 20000)))
> > +		preempt_schedule();  /* Allow test to be preempted. */
> > +#endif
> > +}
> > +
> > +static void torture_mutex_unlock(void) __releases(torture_mutex)
> > +{
> > +	mutex_unlock(&torture_mutex);
> > +}
> > +
> > +static struct lock_torture_ops mutex_lock_ops = {
> > +	.writelock	= torture_mutex_lock,
> > +	.write_delay	= torture_mutex_delay,
> > +	.writeunlock	= torture_mutex_unlock,
> > +	.name		= "mutex_lock"
> > +};
> > +
> >  /*
> >   * Lock torture writer kthread. Repeatedly acquires and releases
> >   * the lock, checking for duplicate acquisitions.
> > @@ -352,7 +389,7 @@ static int __init lock_torture_init(void)
> >  	int i;
> >  	int firsterr = 0;
> >  	static struct lock_torture_ops *torture_ops[] = {
> > -		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops,
> > +		&lock_busted_ops, &spin_lock_ops, &spin_lock_irq_ops, &mutex_lock_ops,
> >  	};
> >
> >  	if (!torture_init_begin(torture_type, verbose, &torture_runnable))
> > --
>
> And I queued the following patch to catch up the scripting.

Thanks! Completely overlooked the scripting bits. I'll keep it in mind
in the future.
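
FWIW, once applied the new type can be exercised through the existing
locktorture module parameters, e.g.:

	modprobe locktorture torture_type=mutex_lock nwriters_stress=5

where mutex_lock is the .name registered by the new ops table above.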


