Subject: Re: [PATCH 1/4] sched/fair: Track efficiency of select_idle_sibling
On Mon, Mar 23, 2020 at 01:30:10PM +0000, Valentin Schneider wrote:
>
> Hi Mel,
>
> On Fri, Mar 20 2020, Mel Gorman wrote:
> > SIS Search: Number of calls to select_idle_sibling
> >
> > SIS Domain Search: Number of times the domain was searched because the
> > fast path failed.
> >
> > SIS Scanned: Generally the number of runqueues scanned but the fast
> > path counts as 1 regardless of the values for target, prev
> > and recent.
> >
> > SIS Domain Scanned: Number of runqueues scanned during a search of the
> > LLC domain.
> >
> > SIS Failures: Number of SIS calls that failed to find an idle CPU
> >
>
> Let me put my changelog pedant hat on; it would be nice to explicitly
> separate the 'raw' stats (i.e. those that you are adding to sis()) from
> the downstream ones.
>
> AIUI the ones above here are the 'raw' stats (except "SIS Domain
> Scanned", I'm not sure I get where this one comes from?), and the ones
> below are the downstream, post-processed ones.
>

I can fix that up.
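
To make the raw/derived split concrete: the raw stats are new per-rq
schedstat counters and everything from "SIS Search Efficiency" onwards
is computed from them after the fact. Roughly, the counters look like
this (only sis_search and sis_scanned appear in the hunk further down;
the other names are inferred from the stat descriptions, so treat this
as a sketch):

	/* New schedstat fields in struct rq (sketch, names partly inferred) */
	unsigned int		sis_search;		/* SIS Search */
	unsigned int		sis_domain_search;	/* SIS Domain Search */
	unsigned int		sis_scanned;		/* SIS Scanned */
	unsigned int		sis_domain_scanned;	/* SIS Domain Scanned */
	unsigned int		sis_failed;		/* SIS Failures */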

> > SIS Search Efficiency: A ratio expressed as a percentage of runqueues
> > scanned versus idle CPUs found. A 100% efficiency indicates that
> > the target, prev or recent CPU of a task was idle at wakeup. The
> > lower the efficiency, the more runqueues were scanned before an
> > idle CPU was found.
> >
> > SIS Domain Search Efficiency: Similar, except only for the slower SIS
> > path.
> >
> > SIS Fast Success Rate: Percentage of SIS that used target, prev or
> > recent CPUs.
> >
> > SIS Success rate: Percentage of scans that found an idle CPU.
> >
> > Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
>
> With the nits taken into account:
>
> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
>
> > ---
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 1dea8554ead0..9d32a81ece08 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6150,6 +6153,15 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> > struct sched_domain *sd;
> > int i, recent_used_cpu;
> >
> > + schedstat_inc(this_rq()->sis_search);
> > +
> > + /*
> > + * Checking if prev, target and recent is treated as one scan. A
> > + * perfect hit on one of those is considered 100% efficiency.
> > + * Further scanning impairs efficiency.
> > + */
> > + schedstat_inc(this_rq()->sis_scanned);
> > +
>
> You may want to move that sis_scanned increment to below the 'symmetric'
> label. Also, you should instrument select_idle_capacity() with
> sis_scanned increments, if only for the sake of completeness.
>

Yes, that would make more sense. Instrumenting select_idle_capacity is
trivial so I'll fix that up too.
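
Something along these lines in the select_idle_capacity() scan loop
(sketch only, untested, surrounding context paraphrased from memory):

	for_each_cpu_wrap(cpu, cpus, target) {
		/* Each runqueue examined counts towards SIS Scanned */
		schedstat_inc(this_rq()->sis_scanned);

		if (!available_idle_cpu(cpu))
			continue;
		/* ... capacity fitness checks as before ... */
	}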

> One last thing: each of the new schedstat_inc() callsites use this_rq();
> IIRC because of the RELOC_HIDE() hiding underneath there's very little
> chance of the compiler caching this. However, this depends on schedstat,
> so I suppose that is fine.
>

It's a deliberate choice so that there is no cost when schedstats are
disabled. While some schedstat sites look up the current runqueue, not
all of them do. This might be a little wasteful when schedstats are
enabled, but at least it's consistent.
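
To expand on that, schedstat_inc() only touches its argument behind the
schedstats static key, so the this_rq() call in the argument is never
evaluated unless the stats are both compiled in and enabled at runtime.
From kernel/sched/stats.h (paraphrased from memory, double-check the
exact definitions):

	#ifdef CONFIG_SCHEDSTATS
	#define schedstat_enabled()	static_branch_unlikely(&sched_schedstats)
	#define schedstat_inc(var)	do { if (schedstat_enabled()) { var++; } } while (0)
	#else
	#define schedstat_enabled()	0
	#define schedstat_inc(var)	do { } while (0)
	#endif

So schedstat_inc(this_rq()->sis_search) is a complete no-op when
schedstats are compiled out, and only the static-branch nop runs when
they are compiled in but disabled; this_rq() is not evaluated in either
case.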

Thanks

--
Mel Gorman
SUSE Labs