Subject: Re: [PATCH 3/4] sched: drop group_capacity to 1 only if local group has extra capacity
On Fri, Oct 15, 2010 at 10:05 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Fri, 2010-10-15 at 09:13 -0700, Nikhil Rao wrote:
>
>> >> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
>> >> index 0dd1021..da0c688 100644
>> >> --- a/kernel/sched_fair.c
>> >> +++ b/kernel/sched_fair.c
>> >> @@ -2030,6 +2030,7 @@ struct sd_lb_stats {
>> >>       unsigned long this_load;
>> >>       unsigned long this_load_per_task;
>> >>       unsigned long this_nr_running;
>> >> +     unsigned long this_group_capacity;
>> >>
>> >>       /* Statistics of the busiest group */
>> >>       unsigned long max_load;
>> >> @@ -2546,15 +2547,18 @@ static inline void update_sd_lb_stats(struct sched_domain *sd, int this_cpu,
>> >>               /*
>> >>                * In case the child domain prefers tasks go to siblings
>> >>                * first, lower the sg capacity to one so that we'll try
>> >> -              * and move all the excess tasks away.
>> >> +              * and move all the excess tasks away. We lower capacity only
>> >> +              * if the local group can handle the extra capacity.
>> >>                */
>> >> -             if (prefer_sibling)
>> >> +             if (prefer_sibling && !local_group &&
>> >> +                 sds->this_nr_running < sds->this_group_capacity)
>> >>                       sgs.group_capacity = min(sgs.group_capacity, 1UL);
>> >>
>> >>               if (local_group) {
>> >>                       sds->this_load = sgs.avg_load;
>> >>                       sds->this = sg;
>> >>                       sds->this_nr_running = sgs.sum_nr_running;
>> >> +                     sds->this_group_capacity = sgs.group_capacity;
>> >>                       sds->this_load_per_task = sgs.sum_weighted_load;
>> >>               } else if (update_sd_pick_busiest(sd, sds, sg, &sgs, this_cpu)) {
>> >>                       sds->max_load = sgs.avg_load;
>
> OK, but then you assume that local_group will always be the first group
> served. Nor is there any purpose in adding sds->this_group_capacity;
> you could keep that local to this function.
>

Yes, this patch assumes that local_group is always the first group served.

About this_group_capacity, yes -- we don't need the additional field in
sd_lb_stats; we can keep it local to this function. I just realized that if
we re-order the patches, we can reuse sgs.has_capacity from the next patch.
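Roughly something like this (just a sketch, assuming the reordered series
propagates the group's has_capacity bit into sd_lb_stats as
this_has_capacity; the exact name is open until the reorder is done):

	/*
	 * In case the child domain prefers tasks go to siblings
	 * first, lower the sg capacity to one so that we'll try
	 * and move all the excess tasks away. Lower the capacity
	 * only if the local group has spare capacity to absorb
	 * the excess tasks.
	 */
	if (prefer_sibling && !local_group && sds->this_has_capacity)
		sgs.group_capacity = min(sgs.group_capacity, 1UL);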

> For regular balancing local_group will be the first, since we only
> ascend the domain tree on the local groups. But it's not true for no_hz
> balancing afaict.
>

As Suresh points out, even with NOHZ the local_group is still the first
group served, since nohz balancing also ascends the per-cpu sched domain
hierarchy. I can add a comment to make this clear; a rough sketch below.
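(Sketch only; exact wording and placement in update_sd_lb_stats() are
open to suggestion.)

	/*
	 * The groups of a domain are walked starting with the group
	 * containing the balancing cpu, so the local group is always
	 * updated before any remote group at the same level. This
	 * holds for nohz idle balancing as well, since the idle load
	 * balancer walks each idle cpu's own domain hierarchy.
	 */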
