From:	Gautham R Shenoy <ego@in.ibm.com>
Subject:	[RFC PATCH 11/11] sched: Add comments to find_busiest_group() function.
Date:	2009-03-25
Add /** style comments around find_busiest_group(). Also add a few
explanatory comments.

This concludes the find_busiest_group() cleanup. The function is down to 72
lines from the original 313 lines.

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
---
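For reviewers: after this series, find_busiest_group() has roughly the
following shape (a simplified sketch, not the exact code; the signature is
abridged, and update_sd_lb_stats()/calculate_imbalance() are the helpers
factored out earlier in the series):

	static struct sched_group *
	find_busiest_group(struct sched_domain *sd, int this_cpu, ...)
	{
		struct sd_lb_stats sds;

		memset(&sds, 0, sizeof(sds));

		/* Gather load statistics for all groups in this domain. */
		update_sd_lb_stats(sd, this_cpu, idle, sd_idle, cpus,
					balance, &sds);

		/* Take one of the "out_balanced" exits for the six cases
		 * documented in the hunk below; otherwise compute the
		 * amount of load to move and return the busiest group. */
		calculate_imbalance(&sds, this_cpu, imbalance);
		return sds.busiest;
	}
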
kernel/sched.c | 50 ++++++++++++++++++++++++++++++++++++++++++--------
1 files changed, 42 insertions(+), 8 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 6404ddf..47a1a7d 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3572,10 +3572,30 @@ static inline void calculate_imbalance(struct sd_lb_stats *sds, int this_cpu,
 }
 /******* find_busiest_group() helpers end here *********************/
 
-/*
- * find_busiest_group finds and returns the busiest CPU group within the
- * domain. It calculates and returns the amount of weighted load which
- * should be moved to restore balance via the imbalance parameter.
+/**
+ * find_busiest_group - Returns the busiest group within the sched_domain
+ * if there is an imbalance. If there isn't an imbalance, and
+ * the user has opted for power-savings, it returns a group whose
+ * CPUs can be put to idle by rebalancing those tasks elsewhere, if
+ * such a group exists.
+ *
+ * Also calculates the amount of weighted load which should be moved
+ * to restore balance.
+ *
+ * @sd: The sched_domain whose busiest group is to be returned.
+ * @this_cpu: The cpu for which load balancing is currently being performed.
+ * @imbalance: Variable which stores amount of weighted load which should
+ *	be moved to restore balance/put a group to idle.
+ * @idle: The idle status of this_cpu.
+ * @sd_idle: The idleness of sd.
+ * @cpus: The set of CPUs under consideration for load-balancing.
+ * @balance: Pointer to a variable indicating if this_cpu
+ *	is the appropriate cpu to perform load balancing at this level.
+ *
+ * Returns:	- the busiest group if an imbalance exists.
+ *		- If no imbalance exists and the user has opted for
+ *		  power-savings balance, the least loaded group whose CPUs
+ *		  can be put to idle by rebalancing its tasks onto our group.
  */
 static struct sched_group *
 find_busiest_group(struct sched_domain *sd, int this_cpu,
@@ -3593,17 +3613,31 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 	update_sd_lb_stats(sd, this_cpu, idle, sd_idle, cpus,
 					balance, &sds);
 
+	/* Cases where an imbalance does not exist from the POV of this_cpu: */
+	/* 1) this_cpu is not the appropriate cpu to perform load balancing
+	 *    at this level.
+	 * 2) There is no busy sibling group to pull from.
+	 * 3) This group is the busiest group.
+	 * 4) This group is more busy than the avg busyness at this
+	 *    sched_domain.
+	 * 5) The imbalance is within the specified limit.
+	 * 6) Any rebalance would lead to ping-pong.
+	 */
 	if (balance && !(*balance))
 		goto ret;
 
-	if (!sds.busiest || sds.this_load >= sds.max_load
-			|| sds.busiest_nr_running == 0)
+	if (!sds.busiest || sds.busiest_nr_running == 0)
+		goto out_balanced;
+
+	if (sds.this_load >= sds.max_load)
 		goto out_balanced;
 
 	sds.avg_load = (SCHED_LOAD_SCALE * sds.total_load) / sds.total_pwr;
 
-	if (sds.this_load >= sds.avg_load ||
-			100*sds.max_load <= sd->imbalance_pct * sds.this_load)
+	if (sds.this_load >= sds.avg_load)
+		goto out_balanced;
+
+	if (100 * sds.max_load <= sd->imbalance_pct * sds.this_load)
 		goto out_balanced;
 
 	sds.busiest_load_per_task /= sds.busiest_nr_running;


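A note on the now-standalone imbalance_pct test: sd->imbalance_pct is a
percentage (commonly 125 at the CPU level), so the domain is treated as
balanced unless the busiest group is more than (imbalance_pct - 100)%
busier than this group. A worked example with made-up loads:

	/*
	 * Hypothetical numbers, assuming sd->imbalance_pct == 125:
	 *
	 *   this_load = 1000, max_load = 1200:
	 *     100 * 1200 = 120000 <= 125 * 1000 = 125000  -> out_balanced
	 *
	 *   this_load = 1000, max_load = 1300:
	 *     100 * 1300 = 130000 >  125 * 1000 = 125000  -> proceed to
	 *                                                    calculate_imbalance()
	 */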