Subject: Re: [PATCH] sched: wakeup buddy

* Michael Wang <wangyun@linux.vnet.ibm.com> wrote:

> On 03/11/2013 05:40 PM, Ingo Molnar wrote:
> >
> > * Michael Wang <wangyun@linux.vnet.ibm.com> wrote:
> >
> >> Hi, Ingo
> >>
> >> On 03/11/2013 04:21 PM, Ingo Molnar wrote:
> >> [snip]
> >>>
> >>> I have actually written the prctl() approach before, for instrumentation
> >>> purposes, and it does wonders to system analysis.
> >>
> >> The idea sounds great, we could get a lot of new info to implement a
> >> smarter scheduler, that's amazing :)
> >>
> >>>
> >>> Any objections?
> >>
> >> Just one concern - maybe I have misunderstood you, but will it cause
> >> trouble if the prctl() is indiscriminately used by some applications?
> >> Will we get fake data?
> >
> > It's their problem: overusing it will increase their CPU overhead. The two
> > boundary worst-cases are that they either call it too frequently or too
> > rarely:
> >
> > - too frequently: it approximates the current cpu-runtime work metric
> >
> > - too infrequently: we just ignore it and fall back to a runtime metric
> >   if it does not change.
> >
> > It's not like it can be used to get preferential treatment - we don't ever
> > balance other tasks against these tasks based on work throughput, we try
> > to maximize this workload's work throughput.
> >
> > What could happen is if an app is 'optimized' for a buggy scheduler by
> > changing the work metric frequency. We offer no guarantee - apps will be
> > best off (and users will be least annoyed) if apps honestly report their
> > work metric.
> >
> > Instrumentation/stats/profiling will also double check the correctness of
> > this data: if developers/users start relying on the work metric as a
> > substitute benchmark number, then app writers will have an additional
> > incentive to make them correct.
>
> I see. I haven't figured out how to use the info wisely yet, but I have
> the feeling that it will make the scheduler very different ;-)
>
> Maybe we could implement the API and get that info ready first (along
> with the new sched-pipe which provides work tick info), then think about
> how to use it in the scheduler - are there any patches on the way?

Absolutely.

Beyond the new prctl, no new API is needed: a perf soft event could be
added, and/or a tracepoint. Then perf stat and perf record could be used
with it. 'perf bench' could be extended to generate the work tick in its
'perf bench sched ...' workloads - and for 'perf bench mem numa' as well.
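
For illustration, the application side might look roughly like the sketch
below. PR_SET_WORK_TICK and its value are hypothetical placeholders (no
such prctl command exists), and do_one_request() just stands in for the
real work; the point is the granularity: report once per completed unit
of useful work, not per inner loop iteration and not only once at startup.

#include <sys/prctl.h>
#include <unistd.h>

/* Hypothetical prctl command - the value is a placeholder, not real ABI. */
#define PR_SET_WORK_TICK	0x59570001

static void do_one_request(void)
{
	usleep(1000);		/* stands in for the actual useful work */
}

int main(void)
{
	for (;;) {
		do_one_request();
		/*
		 * Report one completed work unit to the scheduler -
		 * once per request, not per iteration of inner loops.
		 */
		prctl(PR_SET_WORK_TICK, 1, 0, 0, 0);
	}
}

A matching soft event or tracepoint could then be counted with perf stat
or perf record; the exact event name would depend on how it gets wired up.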

vsyscall-accelerating it could be a separate, more complex step: it needs
a per-thread writable vsyscall data area to bring the overhead for
applications near zero. Performance-critical apps won't call an extra
syscall.
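
Roughly, the vsyscall-accelerated fast path could look like the sketch
below. The per-thread writable area and how it gets mapped are purely
illustrative (no such ABI exists); it only shows why the application-side
cost would be near zero - a single store, no syscall:

/*
 * Illustrative only: assumes the kernel exposes a per-thread writable
 * mapping that the scheduler samples whenever it updates its work
 * metric.  The hot path is one increment - no syscall, no locking.
 */
struct work_tick_area {
	unsigned long	work_done;	/* incremented by the application */
};

/* Set up once per thread, e.g. via some hypothetical mmap()/prctl() handshake. */
static __thread struct work_tick_area *wta;

static inline void work_tick_done(void)
{
	if (wta)
		wta->work_done++;	/* plain store; the kernel reads it */
}

Keeping the area per thread (rather than one shared page) avoids any need
for atomics on the store, which is presumably what makes the near-zero
overhead realistic.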

Thanks,

Ingo

