Subject: Re: [PATCH 1/1] intel_pstate: Increase hold-off time before busyness is scaled
On Fri, Feb 19, 2016 at 12:29 AM, Pandruvada, Srinivas
<srinivas.pandruvada@intel.com> wrote:
> On Thu, 2016-02-18 at 20:43 +0100, Rafael J. Wysocki wrote:
>> Hi Mel,
>>
>> On Thu, Feb 18, 2016 at 12:11 PM, Mel Gorman
>> <mgorman@techsingularity.net> wrote:
>>
>> [cut]
>>
>> >
>> > Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
>> > ---
>> > drivers/cpufreq/intel_pstate.c | 2 +-
>> > 1 file changed, 1 insertion(+), 1 deletion(-)
>> >
>> > diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
>> > index cd83d477e32d..54250084174a 100644
>> > --- a/drivers/cpufreq/intel_pstate.c
>> > +++ b/drivers/cpufreq/intel_pstate.c
>> > @@ -999,7 +999,7 @@ static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu)
>> >  	sample_time = pid_params.sample_rate_ms * USEC_PER_MSEC;
>> >  	duration_us = ktime_us_delta(cpu->sample.time,
>> >  				     cpu->last_sample_time);
>> > -	if (duration_us > sample_time * 3) {
>> > +	if (duration_us > sample_time * 12) {
>> >  		sample_ratio = div_fp(int_tofp(sample_time),
>> >  				      int_tofp(duration_us));
>> >  		core_busy = mul_fp(core_busy, sample_ratio);
>> > --
>>
>> I've been considering making a change like this, but I wasn't quite
>> sure how much greater the multiplier should be, so I've queued this
>> one up for 4.6.
>>
> We need to test the power impact on different server workloads, so
> please hold on.
> We have server folks complaining that we already consume too much
> power.

I'll drop the commit if it turns out to increase energy consumption too much.

Thanks,
Rafael
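
For context, here is a minimal user-space sketch (not the kernel code) of how the hold-off multiplier in the hunk above decides when core_busy gets scaled down. The int_tofp/div_fp/mul_fp helpers are reimplemented with the 8-bit fixed-point layout intel_pstate uses, and the default 10 ms sample rate is an assumption:

/*
 * Sketch only: illustrates the effect of the hold-off multiplier on
 * busyness scaling. Helpers mimic intel_pstate's 8-bit fixed point;
 * sample_rate_ms = 10 is assumed, not taken from the patch.
 */
#include <stdio.h>
#include <stdint.h>

#define FRAC_BITS 8
static int32_t int_tofp(int32_t x) { return x << FRAC_BITS; }
static int32_t div_fp(int32_t x, int32_t y) { return ((int64_t)x << FRAC_BITS) / y; }
static int32_t mul_fp(int32_t x, int32_t y) { return ((int64_t)x * y) >> FRAC_BITS; }

static int32_t scale_busy(int32_t core_busy, int64_t duration_us, int hold_off)
{
	int64_t sample_time = 10 * 1000;	/* assumed sample_rate_ms * USEC_PER_MSEC */

	/* Only scale busyness down once the gap between samples exceeds the hold-off. */
	if (duration_us > sample_time * hold_off) {
		int32_t ratio = div_fp(int_tofp((int32_t)sample_time),
				       int_tofp((int32_t)duration_us));
		core_busy = mul_fp(core_busy, ratio);
	}
	return core_busy;
}

int main(void)
{
	int32_t busy = int_tofp(90);	/* 90% busy, in fixed point */

	/* 60 ms between samples: scaled with the old *3 hold-off, untouched with *12. */
	printf("old (*3):  %d%%\n", scale_busy(busy, 60000, 3) >> FRAC_BITS);
	printf("new (*12): %d%%\n", scale_busy(busy, 60000, 12) >> FRAC_BITS);
	return 0;
}

With a 60 ms gap between samples, the old *3 hold-off scales a 90% busy reading down by sample_time/duration_us (to roughly 15%), while the *12 hold-off leaves it untouched until the idle gap exceeds 120 ms, which is the behavioural change under discussion.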
