    From: Chen, Kenneth W
    Subject: RE: [PATCH RFC] smt nice introduces significant lock contention
    Date: Fri, 2 Jun 2006
    Nick Piggin wrote on Friday, June 02, 2006 1:56 AM
    > Chen, Kenneth W wrote:
    >
    > > Ha, you beat me by one minute. It did cross my mind to use a
    > > trylock there as well. Take a look at my version; I think I have a
    > > better inner loop.
    >
    > Actually you *have* to use trylocks I think, because the current runqueue
    > is already locked.
    >
    > And why do we lock all siblings in the other case, for that matter? (not
    > that it makes much difference except on niagara today).
    >
    > Rolled up patch with everyone's changes attached.
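
    For anyone following along: the trylock really is mandatory here.
    schedule() calls these functions with the current CPU's runqueue lock
    already held, so taking a sibling's runqueue lock with a plain
    spin_lock() sets up the classic AB-BA deadlock when two siblings do
    it to each other at the same time. A sketch of the hazard (the
    rq0/rq1 names are hypothetical, purely for illustration):

        /*
         * Both CPUs are in schedule() with their own runqueue lock held
         * and interrupts disabled:
         *
         *   CPU0: holds rq0->lock, spins on rq1->lock
         *   CPU1: holds rq1->lock, spins on rq0->lock  => AB-BA deadlock
         *
         * spin_trylock() backs off instead of spinning, so a contended
         * sibling runqueue is simply skipped on this pass:
         */
        if (unlikely(!spin_trylock(&smt_rq->lock)))
                continue;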


    OK, it's down to nit-picking now:

    Remove the this_rq argument from wake_sleeping_dependent(), since it
    is not used. Nick, you had that in your earlier version, but it got
    lost along the way.

    I don't like having a cpumask_t declared on the stack: it is NR_CPUS
    bits wide, which adds up on large configurations. Here is my version
    that gets rid of it in wake_sleeping_dependent() and
    dependent_sleeper().

    - Ken


    --- ./kernel/sched.c.orig	2006-06-02 03:20:40.000000000 -0700
    +++ ./kernel/sched.c	2006-06-02 03:20:59.000000000 -0700
    @@ -2714,10 +2714,9 @@ static inline void wakeup_busy_runqueue(
     /*
      * Called with interrupts disabled and this_rq's runqueue locked.
      */
    -static void wake_sleeping_dependent(int this_cpu, runqueue_t *this_rq)
    +static void wake_sleeping_dependent(int this_cpu)
     {
     	struct sched_domain *tmp, *sd = NULL;
    -	cpumask_t sibling_map;
     	int i;
     
     	for_each_domain(this_cpu, tmp)
    @@ -2728,10 +2727,11 @@ static void wake_sleeping_dependent(int
     	if (!sd)
     		return;
     
    -	sibling_map = sd->span;
    -	cpu_clear(this_cpu, sibling_map);
    -	for_each_cpu_mask(i, sibling_map) {
    +	for_each_cpu_mask(i, sd->span) {
     		runqueue_t *smt_rq = cpu_rq(i);
    +
    +		if (i == this_cpu)
    +			continue;
     		if (unlikely(!spin_trylock(&smt_rq->lock)))
     			continue;
     
    @@ -2761,7 +2761,6 @@ static int dependent_sleeper(int this_cp
     			     struct task_struct *p)
     {
     	struct sched_domain *tmp, *sd = NULL;
    -	cpumask_t sibling_map;
     	int ret = 0, i;
     
     	for_each_domain(this_cpu, tmp)
    @@ -2772,12 +2771,13 @@ static int dependent_sleeper(int this_cp
     	if (!sd)
     		return 0;
     
    -	sibling_map = sd->span;
    -	cpu_clear(this_cpu, sibling_map);
    -	for_each_cpu_mask(i, sibling_map) {
    +	for_each_cpu_mask(i, sd->span) {
     		runqueue_t *smt_rq;
     		task_t *smt_curr;
     
    +		if (i == this_cpu)
    +			continue;
    +
     		smt_rq = cpu_rq(i);
     		if (unlikely(!spin_trylock(&smt_rq->lock)))
     			continue;
    @@ -2842,7 +2842,7 @@ check_smt_task:
     	return ret;
     }
     #else
    -static inline void wake_sleeping_dependent(int this_cpu, runqueue_t *this_rq)
    +static inline void wake_sleeping_dependent(int this_cpu)
     {
     }
     
    @@ -2973,7 +2973,7 @@ need_resched_nonpreemptible:
     		if (!rq->nr_running) {
     			next = rq->idle;
     			rq->expired_timestamp = 0;
    -			wake_sleeping_dependent(cpu, rq);
    +			wake_sleeping_dependent(cpu);
     			goto switch_tasks;
     		}
     	}
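
    For reference, this is roughly what wake_sleeping_dependent() ends up
    looking like with the patch applied. The SD_SHARE_CPUPOWER domain
    walk and the wakeup_busy_runqueue() call are filled in from the
    surrounding 2.6.17-era sched.c rather than from the hunks above, so
    treat it as a sketch, not the applied result:

        static void wake_sleeping_dependent(int this_cpu)
        {
                struct sched_domain *tmp, *sd = NULL;
                int i;

                /* Find the SMT sibling domain, if there is one. */
                for_each_domain(this_cpu, tmp)
                        if (tmp->flags & SD_SHARE_CPUPOWER)
                                sd = tmp;

                if (!sd)
                        return;

                /* Walk sd->span in place; no cpumask_t copy on the stack. */
                for_each_cpu_mask(i, sd->span) {
                        runqueue_t *smt_rq = cpu_rq(i);

                        if (i == this_cpu)
                                continue;
                        /* Never spin: this CPU's runqueue lock is held. */
                        if (unlikely(!spin_trylock(&smt_rq->lock)))
                                continue;

                        wakeup_busy_runqueue(smt_rq);
                        spin_unlock(&smt_rq->lock);
                }
        }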