Date: 2013-05-03
From: Michael Wang
Subject: Re: [PATCH] sched: wake-affine throttle
Hi, Mike

Thanks for your reply.

On 05/03/2013 01:01 PM, Mike Galbraith wrote:
[snip]
>>
>> If this approach caused any concerns, please let me know ;-)
>
> I wonder if throttling on failure is the way to go. Note the minimal
> gain for pgbench with the default 1ms throttle interval. It's not very
> effective out of the box for the load type it's targeted to help, and
> people generally don't twiddle scheduler knobs. If you throttle on
> success, you directly restrict migration frequency without that being
> affected by what other tasks are doing. Seems that would be a bit more
> effective.

This is a good time to draw some conclusions on this problem ;-)

Let's suppose that when wake-affine fails, the next attempt is also more
likely to fail. Then whether to throttle on failure comes down to the
question:

should the throttle interval cover more of the failure windows,
or more of the success windows?

Obviously we should cover more of the failure windows, since a failed
attempt just wastes cycles and changes nothing.
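
To make the two options concrete, below is a minimal sketch of the idea
(not the actual patch: the helper try_wake_affine(), the per-task field
wake_affine_stamp and the knob sysctl_sched_wake_affine_interval are
made-up names, used here only for illustration):

/* Sketch only; the field and knob names are hypothetical. */
static inline bool wake_affine_throttled(struct task_struct *p)
{
	return time_before(jiffies, p->wake_affine_stamp +
			   msecs_to_jiffies(sysctl_sched_wake_affine_interval));
}

/* called from the wakeup path instead of a bare wake_affine() */
static bool try_wake_affine(struct sched_domain *sd,
			    struct task_struct *p, int sync)
{
	bool affine;

	if (wake_affine_throttled(p))
		return false;			/* skip the attempt entirely */

	affine = wake_affine(sd, p, sync);	/* existing decision logic */

	/*
	 * Throttle on failure (current version): a failed attempt only
	 * wasted cycles, so back off before trying again.
	 */
	if (!affine)
		p->wake_affine_stamp = jiffies;

	/*
	 * Throttle on success (RFC version) would instead update the
	 * stamp when the attempt succeeded, directly limiting how often
	 * the wakee can be pulled.
	 */

	return affine;
}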

However, I used to be concerned about the damage done by wake-affine
succeeding at that rapid rate; sure, it also brings benefit, but which
one is bigger?

Now if we look at the RFC version, which throttled on success, we can
see that for pgbench the default 1ms interval brings a benefit of less
than 5%, while the current version, which throttles on failure, brings
up to 7%.

And that eliminates my concern :)

>
> (I still like the wakeup buddy thing, it's more effective because it
> adds and uses knowledge, though without the knob, cache domain size.
> Peter is right about the interrupt wakeups though, that could very
> easily cause regressions, dirt simple throttle is much safer).

Exactly, a dark issue deserves a dark solution, let darkness guide him...

Regards,
Michael Wang

>
> -Mike
>


