Subject: Re: rq lock contention due to commit af7f588d8f73
On Tue, Mar 28, 2023 at 08:39:41AM -0400, Mathieu Desnoyers wrote:
> On 2023-03-28 02:58, Aaron Lu wrote:
> > On Mon, Mar 27, 2023 at 03:57:43PM -0400, Mathieu Desnoyers wrote:
> > > I've just resuscitated my per-runqueue concurrency ID cache patch from an
> > > older patchset, and posted it as RFC. So far it has passed one round of rseq
> > > selftests. Can you test it in your environment to see if I'm on the right track?
> > >
> > > https://lore.kernel.org/lkml/20230327195318.137094-1-mathieu.desnoyers@efficios.com/
> >
> > There are improvements with this patch.
> >
> > When running the client side sysbench with nr_thread=56, the lock
> > contention is gone; with nr_thread=224 (= nr_cpu of this machine), the
> > lock contention dropped from 75% to 27%.
>
> This is a good start!

Yes it is.

>
> Can you compare this with Peter's approach to modify init/Kconfig, make
> SCHED_MM_CID a bool, and set it =n in the kernel config?

I did that yesterday and IIRC, with SCHED_MM_CID disabled the lock
contention is also gone for nr_thread=224.
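
For reference, I followed Peter's suggestion fairly literally: turn the
SCHED_MM_CID entry in init/Kconfig into a visible bool and switch it off.
Roughly like the sketch below (the prompt and help wording are mine, not
Peter's actual patch):

  # init/Kconfig (sketch; prompt/help text is illustrative, not Peter's patch)
  config SCHED_MM_CID
          bool "Per-memory-map concurrency IDs"
          default y
          depends on SMP && RSEQ
          help
            Assign a concurrency ID to each task within a memory map so
            that rseq users can index per-CPU data compactly. Saying n
            here compiles out the mm cid management and the cid_lock
            that this thread is about.

and then CONFIG_SCHED_MM_CID=n in the .config for that build.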

>
> I just want to see what baseline we should compare against.

The baseline is: when there is no cid_lock, there is (almost) no lock
contention for this workload :-)

>
> Another test we would want to try here: there is an arbitrary choice for the
> runqueue cache array size in my own patch:
>
> kernel/sched/sched.h:
> # define RQ_CID_CACHE_SIZE 8
>
> Can you try changing this value to 16 or 32 instead and see if it helps?

Yes sure.

Can't promise I can get to this tonight, but I should be able to finish
these tests tomorrow.
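
In case anyone else wants to run the same sweep: the change is just the
constant in kernel/sched/sched.h as modified by your RFC patch, i.e.
something like

  /* kernel/sched/sched.h, on top of the RFC patch; also test 32 */
  # define RQ_CID_CACHE_SIZE 16

rebuilt once per value and rerun with the same sysbench setup as above.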

Thanks,
Aaron
