    Subject: Re: [RFC][PATCH 5/6] core changes for group fairness
    From: Dmitry Adamushko
    Date: 13 Jun 2007

    On 11/06/07, Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com> wrote:
    > This patch introduces the core changes in CFS required to accomplish
    > group fairness at higher levels. It also modifies load balance interface
    > between classes a bit, so that move_tasks (which is central to load
    > balance) can be reused to balance between runqueues of various types
    > (struct rq in case of SCHED_RT tasks, struct lrq in case of
    > SCHED_NORMAL/BATCH tasks).

    a few things that caught my eye, please see below:


    > +static int balance_tasks(struct rq *this_rq, int this_cpu, struct rq *busiest,
    > +                         unsigned long max_nr_move, unsigned long max_load_move,
    > +                         struct sched_domain *sd, enum idle_type idle,
    > +                         int *all_pinned, unsigned long *load_moved,
    > +                         int this_best_prio, int best_prio, int best_prio_seen,
    > +                         void *iterator_arg,
    > +                         struct task_struct *(*iterator_start)(void *arg),
    > +                         struct task_struct *(*iterator_next)(void *arg));

    IMHO, it looks a bit frightening :) maybe it would be possible to
    create a structure that combines some of the relevant arguments.. at
    least the last three (the iterator argument and the two iterator
    callbacks).
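
    for illustration, a minimal sketch of what such a grouping could look
    like (the struct name and field layout here are hypothetical, not
    taken from the patch):

        /* hypothetical helper: bundle the iterator triple into one argument */
        struct balance_iterator {
                void *arg;                               /* opaque iterator state, e.g. the lrq being scanned */
                struct task_struct *(*start)(void *arg); /* return the first candidate task */
                struct task_struct *(*next)(void *arg);  /* return the next candidate task */
        };

        static int balance_tasks(struct rq *this_rq, int this_cpu,
                        struct rq *busiest,
                        unsigned long max_nr_move, unsigned long max_load_move,
                        struct sched_domain *sd, enum idle_type idle,
                        int *all_pinned, unsigned long *load_moved,
                        int this_best_prio, int best_prio, int best_prio_seen,
                        struct balance_iterator *it);

    that would also let further iterator-related state be added later
    without touching every caller.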


    > -static int move_tasks(struct rq *this_rq, int this_cpu, struct rq *busiest,
    > +static int balance_tasks(struct rq *this_rq, int this_cpu, struct rq *busiest,
    >                          unsigned long max_nr_move, unsigned long max_load_move,
    >                          struct sched_domain *sd, enum idle_type idle,
    > -                        int *all_pinned)
    > +                        int *all_pinned, unsigned long *load_moved,
    > +                        int this_best_prio, int best_prio, int best_prio_seen,
    > +                        void *iterator_arg,
    > +                        struct task_struct *(*iterator_start)(void *arg),
    > +                        struct task_struct *(*iterator_next)(void *arg))

    I think there is a possible problem here. If I'm not completely wrong,
    this function (move_tasks() in the current mainline) can move more
    'load' than specified by 'max_load_move' (the remaining budget is
    checked before each task is pulled, so the weight of the last task
    pulled can exceed whatever is left)..

    as a result, e.g. in the following code:

    > +static int move_tasks(struct rq *this_rq, int this_cpu, struct rq *busiest,
    > +                      unsigned long max_nr_move, unsigned long max_load_move,
    > +                      struct sched_domain *sd, enum idle_type idle,
    > +                      int *all_pinned)
    > +{
    > +        struct sched_class *class = sched_class_highest;
    > +        unsigned long load_moved, total_nr_moved = 0, nr_moved;
    > +
    > +        do {
    > +                nr_moved = class->load_balance(this_rq, this_cpu, busiest,
    > +                                max_nr_move, max_load_move, sd, idle,
    > +                                all_pinned, &load_moved);
    > +                total_nr_moved += nr_moved;
    > +                max_nr_move -= nr_moved;
    > +                max_load_move -= load_moved;

    can become negative.. and as it's 'unsigned', it wraps around to a huge
    positive number, so the loop condition below stays true..

    > +                class = class->next;
    > +        } while (class && max_nr_move && max_load_move);

    '(long)max_load_move > 0' ?
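
    to make the wrap-around concrete, a tiny standalone userspace demo
    (not from the patch; compile with gcc and run):

        #include <stdio.h>

        int main(void)
        {
                unsigned long max_load_move = 5;  /* remaining load budget */
                unsigned long load_moved = 8;     /* a class moved more than was asked for */

                max_load_move -= load_moved;      /* 5 - 8 wraps around */

                printf("max_load_move = %lu\n", max_load_move);          /* huge value */
                printf("unsigned test: %d\n", max_load_move != 0);       /* 1: the loop keeps going */
                printf("signed test:   %d\n", (long)max_load_move > 0);  /* 0: the loop stops */
                return 0;
        }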

    the same applies to a few other similar cases below:

    > +static int
    > +load_balance_fair(struct rq *this_rq, int this_cpu, struct rq *busiest,
    > +                  unsigned long max_nr_move, unsigned long max_load_move,
    > +                  struct sched_domain *sd, enum idle_type idle,
    > +                  int *all_pinned, unsigned long *total_load_moved)
    > +{
    > +        struct lrq *busy_lrq;
    > +        unsigned long load_moved, total_nr_moved = 0, nr_moved, rem_load_move;
    > +
    > +        rem_load_move = max_load_move;
    > +
    > +        for_each_leaf_lrq(busiest, busy_lrq) {
    > +                struct lrq *this_lrq;
    > +                long imbalance;
    > +                unsigned long maxload;
    > +                int this_best_prio, best_prio, best_prio_seen = 0;
    > +
    ..........
    > +
    > +                nr_moved = balance_tasks(this_rq, this_cpu, busiest,
    > +                                max_nr_move, maxload, sd, idle, all_pinned,
    > +                                &load_moved, this_best_prio, best_prio,
    > +                                best_prio_seen,
    > +                                /* pass busy_lrq argument into
    > +                                 * load_balance_[start|next]_fair iterators
    > +                                 */
    > +                                busy_lrq,
    > +                                load_balance_start_fair,
    > +                                load_balance_next_fair);
    > +
    > +                total_nr_moved += nr_moved;
    > +                max_nr_move -= nr_moved;
    > +                rem_load_move -= load_moved;

    the same here: 'rem_load_move' can wrap around.


    > +
    > +                /* todo: break if rem_load_move is < load_per_task */
    > +                if (!max_nr_move || !rem_load_move)

    '(long)rem_load_move <= 0'
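
    i.e., just to spell out the suggested change in context (illustrative):

        /* a wrapped-around rem_load_move is negative when viewed as signed,
         * so this test also stops the loop in the overshoot case
         */
        if (!max_nr_move || (long)rem_load_move <= 0)
                break;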

    and, I think, in a few other places in the code.


    > --
    > Regards,
    > vatsa
    >

    --
    Best regards,
    Dmitry Adamushko
