Date: Fri, 3 May 2013
From: Alex Shi <alex.shi@intel.com>
Subject: Re: [PATCH v4 0/6] sched: use runnable load based balance

> That should probably look like:
>
> preempt_disable();
> raw_spin_unlock_irq();
> preempt_enable_no_resched();
> schedule();
>
> Otherwise you might find a performance regression on PREEMPT=y kernels.

Yes, right!
Thanks a lot for the reminder. The following patch will fix it.
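
For anyone reading along, here is the reasoning as I understand it, as a
kernel-style C sketch (illustrative only, not buildable on its own;
sleep_unlocked() is a made-up name). On PREEMPT=y, raw_spin_unlock_irq()
ends with preempt_enable(), which checks need_resched() and can enter the
scheduler right there; the explicit schedule() that follows would then be
a second pass. Holding an extra preempt count across the unlock avoids
that:

        /* caller holds the raw spinlock, so preempt_count is already >= 1 */
        static void sleep_unlocked(raw_spinlock_t *lock)
        {
                preempt_disable();              /* preempt_count: 1 -> 2 */
                raw_spin_unlock_irq(lock);      /* its preempt_enable() drops
                                                 * 2 -> 1: non-zero, so no
                                                 * reschedule happens here */
                preempt_enable_no_resched();    /* 1 -> 0, no resched check */
                schedule();                     /* the single scheduling point */
        }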
>
> OK, so what I was asking after is if you changed the scheduler after PJTs
> patches landed to deal with this bulk wakeup. Also while aim7 might no longer
> trigger the bad pattern what is to say nothing ever will? In particular
> anything using pthread_cond_broadcast() is known to be suspect of bulk wakeups.

Just found a benchmark for pthread_cond_broadcast:
http://kristiannielsen.livejournal.com/13577.html. Will play with it. :)
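
For reference, the bulk-wakeup pattern itself is easy to reproduce; a
minimal self-contained C sketch of my own (not the benchmark from the
link above), built with gcc -pthread:

        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        #define NTHREADS 64

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
        static int go;

        static void *waiter(void *arg)
        {
                (void)arg;
                pthread_mutex_lock(&lock);
                while (!go)     /* recheck: guards against spurious wakeups */
                        pthread_cond_wait(&cond, &lock);
                pthread_mutex_unlock(&lock);
                return NULL;
        }

        int main(void)
        {
                pthread_t tids[NTHREADS];
                int i;

                for (i = 0; i < NTHREADS; i++)
                        pthread_create(&tids[i], NULL, waiter, NULL);

                sleep(1);       /* crude: give the waiters time to block */

                pthread_mutex_lock(&lock);
                go = 1;
                /* all NTHREADS waiters become runnable in one burst */
                pthread_cond_broadcast(&cond);
                pthread_mutex_unlock(&lock);

                for (i = 0; i < NTHREADS; i++)
                        pthread_join(tids[i], NULL);
                printf("woke all %d waiters in one broadcast\n", NTHREADS);
                return 0;
        }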
>
> Anyway, I'll go try and make sense of some of the actual patches.. :-)
>

---

From 4c9b4b8a9b92bcbe6934637fd33c617e73dbda97 Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@intel.com>
Date: Fri, 3 May 2013 14:51:25 +0800
Subject: [PATCH 8/8] rwsem: small optimizing rwsem_down_failed_common

Peter Zijlstra suggested adding a preempt_disable()/preempt_enable_no_resched()
pair around the raw_spin_unlock_irq() before schedule(), to avoid an
unnecessary trip through the scheduler inside raw_spin_unlock_irq() on
PREEMPT=y kernels. We can also fold the unlock before the wait loop and
the re-lock at the top of the loop into one by keeping wait_lock held
across iterations. This patch does both.

Thanks Peter!

Signed-off-by: Alex Shi <alex.shi@intel.com>
---
lib/rwsem.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/lib/rwsem.c b/lib/rwsem.c
index ad5e0df..9aacf81 100644
--- a/lib/rwsem.c
+++ b/lib/rwsem.c
@@ -212,23 +212,25 @@ rwsem_down_failed_common(struct rw_semaphore *sem,
             adjustment == -RWSEM_ACTIVE_WRITE_BIAS)
                 sem = __rwsem_do_wake(sem, RWSEM_WAKE_READ_OWNED);
 
-        raw_spin_unlock_irq(&sem->wait_lock);
-
         /* wait to be given the lock */
         for (;;) {
-                if (!waiter.task)
+                if (!waiter.task) {
+                        raw_spin_unlock_irq(&sem->wait_lock);
                         break;
+                }
 
-                raw_spin_lock_irq(&sem->wait_lock);
-                /* Try to get the writer sem, may steal from the head writer: */
+                /* Try to get the writer sem, may steal from the head writer */
                 if (flags == RWSEM_WAITING_FOR_WRITE)
                         if (try_get_writer_sem(sem, &waiter)) {
                                 raw_spin_unlock_irq(&sem->wait_lock);
                                 return sem;
                         }
+                preempt_disable();
                 raw_spin_unlock_irq(&sem->wait_lock);
+                preempt_enable_no_resched();
                 schedule();
                 set_task_state(tsk, TASK_UNINTERRUPTIBLE);
+                raw_spin_lock_irq(&sem->wait_lock);
         }
 
         tsk->state = TASK_RUNNING;
--
1.7.12
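
For easier review, this is how the wait loop reads with the patch applied
(reconstructed from the hunk above; surrounding context elided):

        /* wait to be given the lock; wait_lock is still held on entry */
        for (;;) {
                if (!waiter.task) {
                        raw_spin_unlock_irq(&sem->wait_lock);
                        break;
                }

                /* Try to get the writer sem, may steal from the head writer */
                if (flags == RWSEM_WAITING_FOR_WRITE)
                        if (try_get_writer_sem(sem, &waiter)) {
                                raw_spin_unlock_irq(&sem->wait_lock);
                                return sem;
                        }
                preempt_disable();
                raw_spin_unlock_irq(&sem->wait_lock);
                preempt_enable_no_resched();
                schedule();
                set_task_state(tsk, TASK_UNINTERRUPTIBLE);
                raw_spin_lock_irq(&sem->wait_lock);
        }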

