Subject: Re: [PATCH RFC] smt nice introduces significant lock contention
Con Kolivas wrote:
> On Friday 02 June 2006 17:53, Nick Piggin wrote:
>
>>This is a small micro-optimisation / cleanup we can do after
>>smtnice gets converted to use trylocks. Might result in a little
>>less cacheline footprint in some cases.
>
>
> It's only dependent_sleeper that is being converted in these patches. The
> wake_sleeping_dependent component still locks all runqueues and needs to

Oh I missed that.

> succeed in order to ensure a task doesn't keep sleeping indefinitely. That

Let's make it use trylocks as well. wake_priority_sleeper should ensure
things don't sleep forever, I think. We should be optimising for the most
common case, and in many workloads, the runqueue does go idle frequently.
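
Roughly what I mean, as a sketch only (it is the same trylock conversion
the patch at the end of this mail does to wake_sleeping_dependent(); the
caller keeps this_rq->lock held with IRQs off):

	sibling_map = sd->span;
	cpu_clear(this_cpu, sibling_map);
	for_each_cpu_mask(i, sibling_map) {
		/* Don't wait for a busy sibling, just skip it this pass. */
		if (!spin_trylock(&cpu_rq(i)->lock))
			cpu_clear(i, sibling_map);
	}
	/* ... wake the siblings we did lock, then drop their locks ... */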

> one doesn't get called from schedule() so is far less expensive. This means I
> don't think we can change that CPU-based locking order, which I believe was
> introduced to prevent a deadlock (DaveJ discovered it iirc?).
>

AntonB, I think.
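
(For reference, the ordering rule in question is the one the second hunk
below removes: because wake_sleeping_dependent() had to hold several
runqueue locks at once, it dropped this_rq->lock and then took every
sibling's lock in ascending CPU order, so two CPUs could never each hold
one lock while spinning on the other's, i.e. the usual ABBA avoidance:

	spin_unlock(&this_rq->lock);
	for_each_cpu_mask(i, sibling_map)	/* CPUs in ascending order */
		spin_lock(&cpu_rq(i)->lock);

With trylocks nothing ever spins waiting for a sibling's lock, so no
ordering is needed.)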

--
SUSE Labs, Novell Inc.
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c 2006-06-02 18:23:18.000000000 +1000
+++ linux-2.6/kernel/sched.c 2006-06-02 18:26:40.000000000 +1000
@@ -2686,6 +2686,9 @@ static inline void wakeup_busy_runqueue(
 		resched_task(rq->idle);
 }
 
+/*
+ * Called with interrupts disabled and this_rq's runqueue locked.
+ */
 static void wake_sleeping_dependent(int this_cpu, runqueue_t *this_rq)
 {
 	struct sched_domain *tmp, *sd = NULL;
@@ -2699,22 +2702,13 @@ static void wake_sleeping_dependent(int
 	if (!sd)
 		return;
 
-	/*
-	 * Unlock the current runqueue because we have to lock in
-	 * CPU order to avoid deadlocks. Caller knows that we might
-	 * unlock. We keep IRQs disabled.
-	 */
-	spin_unlock(&this_rq->lock);
-
 	sibling_map = sd->span;
-
-	for_each_cpu_mask(i, sibling_map)
-		spin_lock(&cpu_rq(i)->lock);
-	/*
-	 * We clear this CPU from the mask. This both simplifies the
-	 * inner loop and keps this_rq locked when we exit:
-	 */
 	cpu_clear(this_cpu, sibling_map);
+	for_each_cpu_mask(i, sibling_map) {
+		if (unlikely(!spin_trylock(&cpu_rq(i)->lock)))
+			cpu_clear(i, sibling_map);
+	}
+
 
 	for_each_cpu_mask(i, sibling_map) {
 		runqueue_t *smt_rq = cpu_rq(i);
@@ -2724,10 +2718,6 @@ static void wake_sleeping_dependent(int
 
 	for_each_cpu_mask(i, sibling_map)
 		spin_unlock(&cpu_rq(i)->lock);
-	/*
-	 * We exit with this_cpu's rq still held and IRQs
-	 * still disabled:
-	 */
 }
 
 /*
@@ -2961,13 +2951,6 @@ need_resched_nonpreemptible:
 			next = rq->idle;
 			rq->expired_timestamp = 0;
 			wake_sleeping_dependent(cpu, rq);
-			/*
-			 * wake_sleeping_dependent() might have released
-			 * the runqueue, so break out if we got new
-			 * tasks meanwhile:
-			 */
-			if (!rq->nr_running)
-				goto switch_tasks;
 		}
 	}