Subject: [PATCH 0/8] sched_domain balancing via softirq V4
This patchset moves the potentially expensive load balancing out of the
scheduler tick (where we run with interrupts disabled) into a softirq that
is triggered, if necessary, from scheduler_tick(). Load balancing then runs
with interrupts enabled, which first of all reduces interrupt holdoff times.

Moving the load balancing into a softirq also allows some cleanup in
scheduler_tick(): the code becomes easier to read, and the determination
of the state needed for load balancing can be moved out of scheduler_tick().
Load balancing is thereby decoupled from scheduler_tick() and only triggered
on demand via the softirq. On a dual core (SMP) system, load balancing is
triggered in less than 30% of all ticks.
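
To illustrate, the shape of the mechanism is roughly as follows. This is
a minimal sketch, not the literal patch: the names SCHED_SOFTIRQ,
run_rebalance_domains() and rq->next_balance follow the naming used in
this series, but the idle determination and the rebalance_domains() call
are simplified.

static void run_rebalance_domains(struct softirq_action *h)
{
	int this_cpu = smp_processor_id();
	struct rq *this_rq = cpu_rq(this_cpu);
	enum idle_type idle = this_rq->curr == this_rq->idle ?
					SCHED_IDLE : NOT_IDLE;

	/* Interrupts are enabled here. */
	rebalance_domains(this_cpu, idle);
}

void scheduler_tick(void)
{
	struct rq *rq = this_rq();

	/* ... regular tick processing, interrupts disabled ... */

	/* Only trigger balancing when an interval has expired. */
	if (time_after_eq(jiffies, rq->next_balance))
		raise_softirq(SCHED_SOFTIRQ);
}

/* registration, in sched_init(): */
open_softirq(SCHED_SOFTIRQ, run_rebalance_domains, NULL);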

The timer ticks are already staggered by arch initialization. It is not
necessary to additionally stagger load balancing as long as a balancing
run takes a reasonably small amount of time, because the softirq is raised
from the (already staggered) tick. The lower sched domains generally fall
into that category, so we remove the staggering from the scheduler.

We add a spinlock for the higher sched domains that may require longer
scan times. A new flag, SD_SERIALIZE, can be set for a sched domain; for
domains with SD_SERIALIZE set we ensure that balancing occurs on only one
processor at a time across the whole machine. This guarantees exclusion
even if balancing runs for a long time, a guarantee that the staggering
was not able to make.
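
A sketch of the serialization (again illustrative: sd->balance_interval
and sd->busy_factor are existing sched_domain fields, but the
load_balance() call and the bookkeeping are simplified here):

static DEFINE_SPINLOCK(balancing);

static void rebalance_domains(int this_cpu, enum idle_type idle)
{
	struct sched_domain *sd;
	unsigned long interval;

	for_each_domain(this_cpu, sd) {
		interval = sd->balance_interval;	/* in jiffies */
		if (idle != SCHED_IDLE)
			interval *= sd->busy_factor;

		if (sd->flags & SD_SERIALIZE) {
			/* Another cpu is already balancing a
			 * serialized domain: skip it, do not spin.
			 */
			if (!spin_trylock(&balancing))
				continue;
		}

		if (time_after_eq(jiffies, sd->last_balance + interval)) {
			load_balance(this_cpu, sd, idle);
			sd->last_balance = jiffies;
		}

		if (sd->flags & SD_SERIALIZE)
			spin_unlock(&balancing);
	}
}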

The serialization ensures that we do not run into issues where multiple
processors load balance at the same time and then all attempt to pull
processes off the same remote processor. It limits the load that load
balancing can generate on large and very large systems.

There are some other ideas around on how to optimize scheduler
performance for high processor counts (like Suresh's approach of load
balancing from only a single processor in a domain and Ken's idea of
rewriting the scheduler load balancing to be more flexible), but none
of those is ready for prime time yet. These approaches could replace
the serialization in the future.

The serialization for the NUMA scheduling domains alone also means that
the number of times balancing has to be deferred drops significantly;
deferral then only occurs during large scale NUMA balancing.
Load balancing within a particular node is not that critical (especially
with Suresh's latest patch that places all sched_groups on the node) since
the accesses are node local and generally do not require transactions on
the NUMA interconnect.

    Tested on
    UP: x86_64
    SMP: i386 dual core Pentium 940
NUMA: Altix, 8p and 256p

    For the earlier discussion see:


V3->V4:
- Keep last_balance and calculate the next balancing from that starting point.
- Move more code into the time slice calculation and rename time_slice()
  to task_running_tick().
- Separate out the wake_priority_sleeper optimization as a first patch.

V2->V3:
- Rediff against 2.6.19-rc4-mm2.
- Remove useless check for rq->idle in rebalance_domains().

V1->V2:
- Use a softirq instead of a tasklet.
- Remove the load staggering.
- Add a lock to run some sched domains single threaded.
- Use the jiffy comparison functions (see the example below).
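
(The jiffy comparison functions are the wrap-safe helpers from
<linux/jiffies.h>. A minimal example, reusing the trigger check from
the sketch above:

	/* correct even when the jiffies counter wraps around */
	if (time_after_eq(jiffies, rq->next_balance))
		raise_softirq(SCHED_SOFTIRQ);

instead of a raw "jiffies >= rq->next_balance" comparison.)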
